I would like to be a host in this group. Any objections?
I'm a long-time geek in all three areas.
DocMac
(1,628 posts)

HopeHoops
(47,675 posts)I think that's sort of the point. If we can help each other evaluate our work and get through problems, it is a good thing. I'm a database geek from the days of Ashton-Tate dBase on the Apple II. I've coded in damn near everything of significance (and a lot of obscure things too), and I STILL hand-code HTML and CSS.
We can help each other.
DocMac
(1,628 posts)Be it manual or automation. Mostly HP/Mercury tools.
Will your group fall under the computer and internet topic?
I'm not clear how all this works just yet.
HopeHoops
(47,675 posts)Testing requires a level of trust, and very clear expectations about what is to be tested and on what platform. Give it a shot!
boppers
(16,588 posts)It's a very different paradigm: basically every feature, action, interface, etc. has a test written *before* anything is coded. By the time you get to thousands of lines/files/whatever, there's a huge test library, so a bug in one line of code that breaks an obscure feature (one that seems totally unrelated and is only used once every six months) is caught more or less immediately.
http://en.wikipedia.org/wiki/Test-driven_development
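A minimal sketch of the test-first flow described above (the `slugify` function and its cases are hypothetical, just for illustration): the test exists and fails before any implementation is written.

```python
# Step 1: write the test FIRST. At this point slugify() doesn't exist,
# so running test_slugify() would fail - that failure is the point.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"
    assert slugify("") == ""

# Step 2: only now write just enough code to make the test pass.
def slugify(text):
    words = text.strip().lower().split()
    return "-".join(words)

# Step 3: run the test; it now guards this feature forever.
test_slugify()
```

Every later change to `slugify` reruns against this test, which is how the obscure once-every-six-months breakage gets caught immediately.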
HopeHoops
(47,675 posts)Object-oriented languages have gone a long way toward solving lost-memory issues. In the mid '80s, I wrote a rather efficient memory manager that would release all memory on normal termination. Essentially it set aside blocks of memory the size of an object and chained the blocks together. Within the blocks it used linked lists (part of the object allocation) to track free space. If anything failed to release its memory, the termination sequence would walk through all of the object lists and release all of the blocks - boom.
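A rough sketch of that pool idea (all names are mine, not from the original code): fixed-size blocks are chained together, free slots are tracked on a free list, and a single close() drops every block at once, whether or not callers released their objects.

```python
# Hypothetical pool allocator sketch: blocks chained in a list, with a
# free list of (block, slot) handles tracking unused space in the blocks.
class ObjectPool:
    def __init__(self, block_size=8):
        self.block_size = block_size
        self.blocks = []   # the chain of allocated blocks
        self.free = []     # free list of (block_index, slot) pairs

    def _grow(self):
        # Chain on a new block and push all its slots onto the free list.
        index = len(self.blocks)
        self.blocks.append([None] * self.block_size)
        self.free.extend((index, slot) for slot in range(self.block_size))

    def alloc(self, value):
        if not self.free:
            self._grow()
        block, slot = self.free.pop()
        self.blocks[block][slot] = value
        return (block, slot)   # handle the caller uses to release

    def release(self, handle):
        block, slot = handle
        self.blocks[block][slot] = None
        self.free.append(handle)

    def close(self):
        # Termination: walk the chain and drop every block - boom.
        self.blocks.clear()
        self.free.clear()
```

The payoff of the design is that cleanup cost is per-block, not per-object: even if individual releases were missed, close() still reclaims everything in one sweep.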
Most of that is handled by the OS now and you have to get tricky to get around the code/data space division to do anything like self-modifying code. It's sort of a no-no now, but it was a shitload of fun 20+ years ago.
As for TDD, the techniques predate the name by about two decades. Most of us were doing some form of it, we just weren't calling it that. The paradigm breaks down on slow machines and tight deadlines, and a lot of the "library" usage was copy/paste from library text files for compile/link execution speed. That's why I said the drawbacks are fading with time. A 4.77 MHz x86 machine with 4K of memory just doesn't touch a six-core i7 at 3.33 GHz with 24 GB. Automated testing was mostly a matter of "hurry up and wait". Now the biggest problem is not thinking of a failure case when you write the test routine.