W. Edwards Deming = Groundbreaking Leader

September 17, 2023

Framework to Parse External Configuration Files

August 29, 2023

them: “We need to use this framework to parse configuration files!”

me: “Why?”

them: “Because mapping configuration file entries to code constructs in our implementation, and getting the data loaded from file into runtime, is complex. And most projects need it. So it’s an excellent candidate for cross-project reuse!”

me: “Why do you feel that you need to ‘externalize’ configuration data into separate files?”

them: “Because this is a Recognized Best Practice in our industry!”

me: “What practical benefits do you expect to receive from this so-called ‘best practice’? And at what cost(s)?”

them: “External configuration files are more readable and maintainable than code (in our chosen development language)!”

me: “Why do you choose to use an implementation language that makes it impossible to write readable maintainable code?”

them: “But non-developers need to read, understand, and update the configuration parameters!”

me: “Do they, actually in reality, do this?”

them: “Well, they oughta!!!”

me: “Really?”

them: “But some of the configuration items need to be changeable after and independently of ‘configuration build time’ — by the ‘operations’ group!”

me: “Now there you *have a good point*. Let’s put *just those things* in external configuration entries, managed on the servers. This will make it much more costly and difficult to ‘manage’ these values across servers. But this cost is well justified.”

them: “What?!? We can’t deal with any kinds of ‘down sides’ like you mention! You have to ‘fix’ this! Why are you doing this to us?”

me: “It seems that we’re going to have to have a talk about how these systems work, in reality.”

them: “So you’re ‘OK’ with extracting all the rest of the configuration parameters into ‘external’ files, right?”

me: “Files that are checked into the Source Code Repository with the rest of the code, and subject to the same Code Review discipline and policies?”

them: “Yes, of course!”

me: “No, I’m not ‘OK’ with that. I think we’re back to the ‘All our code implementation languages are unreadable and unmaintainable!’ argument.”

them: “But non-technical users need to be able to read and maintain those files!”

me: “And they’re going to use your source control tool (like ‘git’) and go through the ‘pull request’ and code review processes?”

them: “Why are you *creating all these problems on us?!?* You’re supposed to *help* us!”

me: “I am helping you. I intend to save you considerable time and money. So, back to the question: What makes you think that ‘external’ configuration files, which are actually included in and maintained as ‘source’ files in your ‘system configuration,’ are desirable?”

them: “Because then the configuration data will be in well-structured ‘text’ files, which are inherently more readable and maintainable than ‘program code.’ Of course.”

me: “All your ‘source code’ files are also ‘well-structured text files.’ At least, if your programmers wrote ‘good code,’ they are.”

them: “But that’s different! Entirely different!”

me: “How is it different?”

them: “Because ‘programmer source code’ is *hard to understand* and ‘user structured text files’ are *easy*!”

me: “Have you considered implementing a ‘Domain Specific Language’ for configuration, within your favored implementation language(s)?”

them: “But that’s beyond the ability of our coders! And it doesn’t work well in any of our officially accepted standard implementation languages! And even if we did, they’d certainly complicate the heck out of it, making it useless!”

me: “It seems to me that you need to hire better developers. And/or train them better. And you need to choose implementation languages and tools that actually meet your needs.”
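
(For what it’s worth, here is a rough sketch of what an “internal DSL” for configuration can look like in plain Java. The class names, setting names, and values are invented for illustration; this is not code from any real project.)

public final class AppConfig {

    private String databaseUrl = "jdbc:h2:mem:default";
    private int connectionPoolSize = 10;
    private boolean auditLoggingEnabled = false;

    // Each "configuration entry" is just a fluent method call,
    // checked by the compiler and reviewed like any other code.
    public AppConfig databaseUrl(final String url) {
        this.databaseUrl = url;
        return this;
    }

    public AppConfig connectionPoolSize(final int size) {
        this.connectionPoolSize = size;
        return this;
    }

    public AppConfig auditLogging(final boolean enabled) {
        this.auditLoggingEnabled = enabled;
        return this;
    }

    public String databaseUrl() { return databaseUrl; }
    public int connectionPoolSize() { return connectionPoolSize; }
    public boolean auditLoggingEnabled() { return auditLoggingEnabled; }

    // The "configuration file" becomes an ordinary, readable method:
    public static AppConfig production() {
        return new AppConfig()
                .databaseUrl("jdbc:postgresql://db-server/orders")
                .connectionPoolSize(25)
                .auditLogging(true);
    }
}

A non-developer can read the production() method about as easily as a properties file, and the compiler, the tests, and the code review process all apply to it automatically.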

_______________________________________________

This was inspired by the following Slack thread:

https://softwarecrafters.slack.com/archives/C0A7W90G1/p1693208364227139


How to have *Verified* Identities on all your Social Media Accounts

April 2, 2023

With all the recent controversy over who should pay what to be marked as a “verified” identity on a social media account, and the number of cases of people paying money to receive such a mark fraudulently and incorrectly, I thought I would suggest something simple and obvious:

Any organization that has a web site it controls can simply list all of its “official” and/or internally verified social media accounts on that main web site.

If, for example, someone on some social media site is claiming to represent IBM, I would trust what’s listed on the https://www.ibm.com/ web site. And regarding current White House staff, I’d look to https://www.whitehouse.gov/ to list them.

What you’re looking at here is a trivial (and free) WordPress blog for me, Jeffrey Todd Grigg. And I’m known in various places as …

https://twitter.com/JeffGrigg1
(I joined too late to just get my name without the postfix.)

https://mastodon.social/@JeffGrigg
(Seems to be growing.)

https://www.facebook.com/JeffreyToddGrigg
(Mostly to chat with some relatives and friends. I first joined the site because it was impossible to communicate with my “Gen-X” coworkers through other means.)

https://www.instagram.com/jeffreytoddgrigg/
(I mostly ignore this. I joined to follow the site for a famous local dog in my parents’ neighborhood. The dog has since died.)

http://wiki.c2.com/?JeffGrigg
(A read-only historic site — where “Agile Software Development was born.”)


Refactoring is an Investment

December 16, 2018

Inspired by J.B. Rainsberger asking, “What are the values of software?” in his talk, “The Economics of Software Design” at https://dev.tube/video/TQ9rng6YFeY , starting around 4:30 where he talks about how to answer this manager’s question:

“My manager gets angry when I refactor.
What should I do?”

And JBR says that you should say…

“I’m just trying to reduce volatility in the marginal cost of features.”

I don’t think this works for me.  In my experience, the team or line managers I work with typically insist that this is my problem as a professional software developer — that as a professional, I should eat the cost or use professional techniques to make the volatility problem go away, so that they don’t have to deal with it.  “That’s what I hired you for — to make good estimates.  ‘Volatility’ issues are your problem.”

So I find it more helpful to say, quite honestly, that…

The reason I’m doing refactoring is to reduce costs.

Now this is simple, direct, honest, and easy to understand.  And reducing costs is something that they want.  But they don’t believe me, because refactoring clearly requires time (and hence costs money) to accomplish, so it “can’t possibly reduce costs.”

Actually, it can.  Because refactoring improves the structure of the code without changing its functionality.  And this means that refactoring reduces the maintenance costs of the code, in the future.

“But I don’t want you making ‘investments’!”, says the boss, “I want you pounding out code!”

That may sound like a rational and reasonable objection.  But if my boss makes it, then I have to conclude that therefore, we should not be developing software.  We may as well cancel the project and walk away.  Because software development is always inherently a process of making an investment now, by writing the software, in hopes of a return on investment in the future.  If that’s not what we want, then clearly, writing software is the wrong thing for us to be doing.

What we need to be talking about is the rate of investment that we should be doing at each stage of the project.  Near the start, we should be investing more heavily, so that we can reap the rewards of that investment through the largest portion of the project.  Near each major deliverable, we should reduce risk and investment, for the benefit of the short-term goal.

Every working day, software developers like me make hundreds of decisions, to invest in improving the quality of the code, or to mortgage the future, for a short quick burst of productivity.

We often do ourselves no favor by saying that “refactoring reduces costs long-term, by making an investment, short-term,” because it’s easy for our boss to assume that “long term” means years, when, in fact, many refactorings provide immediate benefits that repay the costs within a very short time.  I’m often looking at refactoring changes, or improvements, that have pay-off times best measured in hours, days, or weeks.  Only the more ambitious process and tooling changes may require some number of months to recover the investment.

And nearly every investment you make that improves your productivity on the project, continues to provide benefits for the lifetime of the system.

I find this incremental decision making process more useful than the “do it well or do it poorly” choice that JBR offers the project manager, starting just after 12:00 in the video:  For the same reason you can’t shift an organization directly from yearly releases to weekly releases, you also can’t shift an organization directly from the “bad curve” to the “good curve” — you have to make incremental change, to be successful.

I proposed more realistic formulas from the COCOMO estimation model: a realistic range of costs to develop the software on a project would run from 3.2 * KLoC ^ 1.05 on the low end to 2.8 * KLoC ^ 1.20 on the high end.  As JBR explained in his video, the “good curve” starts “higher” (more costly), but due to the gentler slope of the 1.05 exponent, we’ll find that the “bad curve” of the 1.20 exponent, while it starts lower, quickly climbs much higher, and the overall cost soon becomes overwhelmingly more expensive.

My objective is to move the exponent from 1.20 down towards 1.05, in incremental steps, so that we will have sufficient resources to be able to produce halfway decent software.  This will also increase up-front costs, raising the constant factor from 2.8 up to something closer to 3.2.  But this has little effect, relative to the compounding benefit of the process improvements.
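
To make the compounding effect concrete, here is a small sketch of my own (not part of JBR’s talk) that simply evaluates both curves at a few project sizes:

public class CocomoCurves {

    // Intermediate COCOMO-style cost curves, as used in the discussion above.
    static double goodCurve(final double kloc) { return 3.2 * Math.pow(kloc, 1.05); }
    static double badCurve(final double kloc)  { return 2.8 * Math.pow(kloc, 1.20); }

    public static void main(final String[] args) {
        for (final double kloc : new double[] {1, 10, 100, 1000}) {
            System.out.printf("%6.0f KLoC:  good curve %9.1f   bad curve %9.1f%n",
                    kloc, goodCurve(kloc), badCurve(kloc));
        }
    }
}

At 1 KLoC the “good curve” costs a little more (3.2 versus 2.8 units of effort), but by 100 KLoC it is roughly 400 versus roughly 700, and the gap keeps compounding as the system grows.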


What is the Value of Software?

December 16, 2018

I was inspired by J.B. Rainsberger asking, “What are the values of software?” in his talk, “The Economics of Software Design” at https://dev.tube/video/TQ9rng6YFeY , where he suggested that three of the values of software (out of “7 or 8 million”) are…

  1. Features
  2. Design
  3. Feedback

He goes into detail as to the value of each, but I kept thinking that…

The value of software is automation.

The reason we write software is to automate (business) processes.  That’s what computers can do.  That’s what software does.

Automating processes can be a good thing because it can…

  • increase speed / performance of the operations
  • improve consistency, including enforcing rules
  • reduce cost and waste

But we do have to recognize the trade-offs that exist:  Nothing in the world is “completely free.”  When you automate a process, you have to deal with…

  • loss of flexibility — ad-hoc manual processes are quite flexible
  • the required up-front investment to understand the process and automate it
  • the related ongoing operation and maintenance costs.
    • maintenance is often the cost to achieve or regain some of that flexibility that you need. 

So what about the Values of Software?

Features are good.

But features are not business value.  Features that are not useful to the business users have no business value.  Features that they don’t know about or don’t use have no business value.  Features that they’re afraid to use have no business value.

Only features that contribute to the successful achievement of useful business processes have business value.  Software features that contribute to increased sales, reduced costs, and/or contributing factors like brand recognition, customer and employee satisfaction, product function, etc. have business value.

Is it true that 45% to 65% or more of features are unused?

Probably.  It depends.  Software built for multiple customers probably has many unused features.  Purpose-built software of a single application for a single customer may have fewer unused features.

But the more requirements are “fixed up front,” rather than added incrementally as needed, the stronger the motivation to “pile in” all possible requirements — a process that inevitably leads to “feature bloat,” with many unused features.  If you only get one chance to “state all the business requirements” up-front, and it will be very difficult to add or change anything later, then most people will quite sensibly try to “throw in” practically every possible thing they can think of that may have value.  What would be their motivation to limit themselves to only the items with the highest overall payoff — particularly when “someone else is paying the bill”?

Also, the normal inevitable change of any business, typically around 1% change in overall “business requirements” every month, according to some sources, will inevitably “leave behind” some functionality that had business value at some time, but is no longer of use.

So it’s probably best to develop and deliver features that have high business value frequently.  The feedback you get from delivering usable features that deliver business value can help prioritize subsequent work and focus it on delivering the most highly valued features first.

Design is good.  Good design is good.

But a great design of a useless thing does not save it from being a useless thing.

Good design makes it more likely that the software will successfully implement the intended requirements (both functional and non-functional).

Good design typically also reduces the cost of future changes.  In the best case, this might be “insightful” design — that anticipates likely future changes successfully.  It would be a waste of money to invest and build for requirements that won’t be implemented until some time in the future.  But allowing flexibility for future changes instead of inhibiting them can be cost effective.

There is always a risk that some entirely unexpected future requirement or opportunity will be entirely incompatible with the abstractions we used to build our current “good design.”  With experience and insight as to a very rough range of plausible future changes, we can mitigate this risk.  But it can never be completely eliminated.

Feedback is good.

I don’t really see software as providing feedback directly, itself.  But deploying working software for real use certainly enables receiving much more useful and realistic feedback than having even highly knowledgeable people speculate as to what might happen in various hypothetical situations.  Actual implementation in the “real world” often reveals unexpected details and difficulties that we could easily miss otherwise.  And it’s hard to ignore or deny a problem that is actually happening, as compared to ignoring or dismissing a hypothetical problem or fear that might arise in a conceptual planning session.

Regarding the other 7-8 Million Software Values…

I think I’ll leave those to another talk or article, too.    ;->


Test-Driven Development with “( Test && Commit ) || TestCodeOnly || Revert”

November 23, 2018

I learned through JB Rainsberger’s “The World’s Shortest Article on (test && commit) || revert” blog article that Kent Beck and Oddmund Strømme came up with a new workflow for developing software, inspired by the “Limbo strategy” for concurrent development with a distributed team, summarized quite succinctly as “test && commit || revert“.  A number of people have been talking about it.  Some have been trying it out to see how it works for them, rather than just engaging in armchair speculation.  And there have been some questions about how well the “test && commit || revert” process can work, given that it does not allow for the “Red” step of the traditional “Red, Green, Refactor” cycle of Test-Driven Development.

I find that adding one step, making it “( Test && Commit ) || TestCodeOnly || Revert“, enables using the well-known “Red, Green, Refactor” Test-Driven Development process, in a style that has been recommended by some for quite a few years.  Automating the process, as suggested by these expressions, would most likely improve our rigor and discipline in the TDD process.

So how does it work?

Kent’s “Test, Commit, Revert” expression, and mine, assume that software development is done in very short cycles, by writing a very small amount of code, maybe just one line, “evaluating the expression,” and doing so again.

Each of the “words” in the expressions above is a script or subroutine that does something and returns a Boolean true/false result.

The “Test” action runs all the automated regression xUnit tests, and returns “true” if they all pass.  If one or more tests fail, it returns “false.”

The “Test && Commit” expression implies, due to “short circuit evaluation” of the logical “and” operation (“&&”), that the “Commit” script will only be run if “Test” returns true.

The “Commit” operation should save all the code changes you’ve made (since the last Commit), and return “true.”  This would write your changes to a shared source code repository, if you’re using one.

The next operation and script in Kent Beck’s version is “|| Revert”.  With short-circuit evaluation of this logical “or” operation, the “Revert” script will not be run unless the “Test” operation fails.

So Kent is suggesting that whenever any xUnit test fails, the system should automatically discard all the changes you’ve made to the code since the last time Test returned true.  That is, your code would revert to what it was the last time “Commit” was executed.

I happen to agree with Kent that this is fundamentally a good idea, in spite of the frustrations it may cause.  And this idea was proposed quite seriously by others quite a few years before this current conversation started.

It occurs to me that we can restore the traditional “Red, Green, Refactor” cycle, and improve the automation and rigor of recognized good Test-Driven Development process by adding only one more step to the expression — the TestCodeOnly step.

The “TestCodeOnly” step checks to see if you have changed only xUnit test code, or if your changes include the “Code Under Test.”  If you have only changed code in the source directory holding the xUnit tests, then the “TestCodeOnly” step returns “true.”  But if you have changed any code at all in the “Code Under Test,” it returns “false.”  The “Code Under Test” is the “production code” that you intend to deliver in the working system.  The xUnit test code should be kept separate and not deployed into production.  This separation of “test” and “production” code is a widely used “best practice” that ensures a smaller, simpler, more well-focused production code release by avoiding the distraction and risk of test-only code being deployed into production.
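
To make the short-circuit mechanics concrete, here is a minimal sketch of how the whole expression might be wired up in Java.  The four helper methods are placeholders of my own: in a real setup, “Test” would run your xUnit suite, “Commit” and “Revert” would call your version control tool, and “TestCodeOnly” would inspect which directories your changes touch.

public class TcrWorkflow {

    public static void main(final String[] args) {
        // Short-circuit evaluation gives exactly the behavior described above:
        // Commit runs only if Test passes; TestCodeOnly is checked only if Test
        // fails; Revert runs only if Test fails AND production code was changed.
        final boolean expressionResult = (test() && commit()) || testCodeOnly() || revert();
    }

    static boolean test()         { return runAllTheXUnitTests(); }
    static boolean commit()       { saveAllChangesToTheRepository(); return true; }
    static boolean testCodeOnly() { return changesAreLimitedToTestDirectories(); }
    static boolean revert()       { discardAllChangesSinceTheLastCommit(); return true; }

    // Deliberately unimplemented stubs: each team would script these
    // against its own build tool and source code repository.
    static boolean runAllTheXUnitTests() { throw new UnsupportedOperationException(); }
    static void saveAllChangesToTheRepository() { throw new UnsupportedOperationException(); }
    static boolean changesAreLimitedToTestDirectories() { throw new UnsupportedOperationException(); }
    static void discardAllChangesSinceTheLastCommit() { throw new UnsupportedOperationException(); }
}

Kent’s original “test && commit || revert” is the same expression with the “TestCodeOnly” term left out.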

“( Test && Commit ) || TestCodeOnly || Revert”

So this is how this process would generally work:

You need to start with an environment where the xUnit tests pass.  For the first step in an empty environment, you’d write a single test that does nothing but pass.  Then evaluating the “( Test && Commit ) …” expression will return “true” for the “Test” step, and “Commit” your code changes.

Write a very simple test, in typical TDD style, and evaluate the “( Test && Commit ) || TestCodeOnly || Revert” expression.  Because we have not written code to make the test pass, we should expect the test to fail, so “Test” will return “false.”  This skips the “Commit” operation and checks “TestCodeOnly.”  The “TestCodeOnly” step should return “true,” as we only changed test code, and this prevents the execution of the “Revert” step, which would throw away our changes.

This is the “Red” step of the “Red, Green, Refactor” cycle.  We have one failing test.

Now, as typical of TDD style, we write the minimum amount of code needed to make the test pass.  Then we evaluate the “( Test && Commit ) || TestCodeOnly || Revert” expression again.  We certainly hope that the xUnit “Test” run succeeds, which will result in an automatic “Commit” of our changes.

But if the xUnit “Test” run still fails, then “Test” will return “false.”  Because of this failure, the system will skip the “Commit” step and evaluate the “TestCodeOnly” check.  But this will return “false,” as we did change non-test code, in an attempt to make the test pass.  So the system goes on to run the “Revert” script, which throws away the failing test that we just wrote, and our quick simple attempt to make it pass.  At this point, if we run all the tests again, we will find that they all pass.

It can be frustrating to have the system automatically discard all your changes with a “Revert.”  But doing this does encourage us to “take smaller steps” in developing software, so that we do not risk losing too much.  And this approach has been shown, in practice, to be quite effective.

Example

Let’s suppose that we would like to implement a Fibonacci function in Java using Test-Driven Development, with the “( Test && Commit ) || TestCodeOnly || Revert” expression workflow.

Let’s start with a very simple JUnit test:

import junit.framework.TestCase;

public class FibonacciTest extends TestCase {

    public void test0() {
        assertEquals(0, Maths.fib(0));
    }

}

It doesn’t compile, as there is no “Maths” class, so let’s not run the expression yet.  I’ll let my IDE help me generate the “Maths” class and the “fib” function.  And I’ll put in the simplest possible implementation that will pass the JUnit test above.  I want the test to pass, to get the “Commit” instead of a “Revert” of the code I’ve just written.

public class Maths {
    
    public static long fib(final int index) {
        return 0;
    }

}

I run the “( Test && Commit ) || TestCodeOnly || Revert” expression, and because all the JUnit tests pass, “Test” returns “true,” and “Commit” saves my changes.

Let’s add another test, in the FibonacciTest class:

    public void test1() {
        assertEquals(1, Maths.fib(1));
    }

Evaluating the “( Test && Commit ) || TestCodeOnly || Revert” expression, I find that “Test” returns “false,” because this new test fails, but “TestCodeOnly” returns “true,” because I only made changes to the FibonacciTest class.  I did not change even a single character in the “Maths” class.

Now I need to make the simplest change possible in the “fib” function that will make all of the tests pass.  And I’d better do it right, or I’ll lose my change and my new test!  So, admitting that I’m kind of evil, I “take the lazy way out” and make a one word change to this:

    public static long fib(final int index) {
        return index;
    }

Evaluating the “( Test && Commit ) || TestCodeOnly || Revert” expression, I find that the tests pass, so it commits my changes.

Here’s another test:

    public void test2() {
        assertEquals(1, Maths.fib(2));
    }

Evaluating the “( Test && Commit ) || TestCodeOnly || Revert” expression, I see that the new test fails, but the “TestCodeOnly” check prevents the “Revert.”

I’ll fix it with something that returns the two possible values:

    public static long fib(final int index) {
        return (index == 0) ? 0 : 1;
    }

Evaluating the “( Test && Commit ) || …” commits.

Add a test:

    public void test3() {
        assertEquals(2, Maths.fib(3));
    }

This test fails, but does not cause a Revert.

I’ll do something crazy to make it pass:

    public static long fib(final int index) {
        return index + ((index < 2) ? 0 : -1);
    }

I don’t like how this code “looks,” but it does pass the tests, so I get a Commit.  So this would probably be a good time to refactor.  Something closer to the definition of the Fibonacci sequence would probably be easier to read and maintain.  So I’ll add this:

    public static long fib(final int index) {
        switch (index) {
            case 0:
                return 0;
            case 1:
                return 1;
        }
        return index + ((index < 2) ? 0 : -1);
    }

This passes and commits, so I’ll simplify it to this:

    public static long fib(final int index) {
        switch (index) {
            case 0:
                return 0;
            case 1:
                return 1;
            default:
                return index - 1;
        }
    }

I’m going in the direction of making it recursive, for the “default” case, but let’s add a test to justify adding that complexity:

    public void test4() {
        assertEquals(3, Maths.fib(4));
    }

Oh; it passes and commits.

Let’s try adding another test:

    public void test5() {
        assertEquals(5, Maths.fib(5));
    }

This test fails.

I still want to cheat a bit, like I did before:

    public static long fib(final int index) {
        switch (index) {
            case 0:
                return 0;
            case 1:
                return 1;
            default:
                return index - ((index < 5) ? 1 : 0);
        }
    }

This passes, but has a confusing and annoyingly complex expression.  Is “return index - ((index < 5) ? 1 : 0);” more or less complex than “return fib(index - 1) + fib(index - 2);”?  I’m going to say that the latter is easier to understand, so I refactor to it:

    public static long fib(final int index) {
        switch (index) {
            case 0:
                return 0;
            case 1:
                return 1;
            default:
                return fib(index - 1) + fib(index - 2);
        }
    }

Tests still pass, so it commits.  And I have a simple and easy to understand implementation.  It’s slow, but we can deal with that with a bit more refactoring, should we decide that this is important.
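
If that performance ever did matter, one possible refactoring (a sketch of my own, not part of the exercise above) would be to replace the recursion with a simple iteration, which keeps all of the existing tests passing while avoiding the exponential number of recursive calls:

    public static long fib(final int index) {
        // Walk forward through the sequence: previous and current start at
        // fib(0) and fib(1), and each loop pass advances them one position.
        long previous = 0;
        long current = 1;
        for (int i = 0; i < index; i++) {
            final long next = previous + current;
            previous = current;
            current = next;
        }
        return previous;
    }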


Authority and Responsibility are the same thing

September 20, 2018

Inspired by Ron Jeffries’ “Manager Responsibilities” article, I responded…

I like that you gave the word “Responsibilities” such prominence. While in some ways I have been quite critical of “management” lately, I think that my issue lies more with the question of authority and responsibility.

I think that…

Authority and Responsibility are the same thing.

And that this is important.

I think that those who try to delegate or impose responsibility on others, while denying them the necessary authority to act are frauds.

And I have seen quite a few people over the years misusing agile concepts to do that. “This is SCRUM,” they say, “And you take full responsibility for your commitment to deliver all these things by the deadline (… regardless of anything I or others do to you). This is ‘your commitment’ (that I’m imposing on you).” And (sometimes) then, they actively sabotage your efforts and yet still insist on holding you “responsible” for “your commitment.” I find such behavior fraudulent. You give both or you take both. You can’t give one and keep back the other.

Further…
To clarify…

If you use your authority, then you are taking responsibility.

When you intervene and tell your people to stop doing this, and to do that instead… When you tell people to stop doing things the way they think is best, and do things your way instead… Then you have used your authority to override others. And by doing so, you have taken responsibility for the results. No matter how much you might try to deny it. No matter how much you might try to appeal to “their professionalism” or “their job title” as reasons why they should give up everything and possibly “do the impossible” to “deliver on their commitments,” you have used your authority in ways that affect the outcome, and so you are responsible for that.

When I am working as a leader or manager, I strive to keep that in mind: That it’s impossible to delegate responsibility without also delegating authority. And when I use my authority, I am taking responsibility.

I have found that that works well. And I wish that others would also keep that in mind, and do the same.


RE: 20 Tips for becoming a better programmer

May 16, 2013

Regarding “20 Tips for becoming a better programmer”…
http://alfasin.com/20-tips-for-becoming-a-better-programmer/?goback=%2Egde_70526_member_238774726

Yes, these are some good rules. And an expert programmer not only knows the rules, but also knows when to violate them. Because the rules often conflict. And to achieve the best result often involves trade-offs and compromises.

“1. There should be only ONE single exit point to each method (use if-else whenever needed).”
This is a great structured programming rule. And in the time when individual functions often spanned multiple printed pages, it was a practical necessity – for those who wished to preserve their sanity.

But even and especially with excessively large functions, “guard clauses” at the top of a function that exit quickly when the input parameters are bad or have extreme “special case” values make sense.

And in these days of object-oriented programming, methods with more than a few dozen lines are questionable. With short methods or methods with an extremely simple repeating structure (switch-case), it can make a lot of sense to have multiple exit points (return statements).
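
For illustration (this example is mine, not from the original article), here is what the guard-clause style looks like in Java, with an early return for each special case and one more return for the main result:

    public static double averageOf(final double[] values) {
        // Guard clauses: get the bad and special-case inputs out of the way first.
        if (values == null) {
            throw new IllegalArgumentException("values must not be null");
        }
        if (values.length == 0) {
            return 0.0;   // special case: treat the average of nothing as zero
        }
        // The "happy path" then reads straight through, with its own return.
        double sum = 0.0;
        for (final double value : values) {
            sum += value;
        }
        return sum / values.length;
    }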

“2. When using if-else, make sure to place the SHORTER code snippet in the if:”
But code that says

If not X then
    Do A.
Else
    Do B.
EndIf

Will confuse people and rot their brains. When does it “Do B.”? It does B when “not not X”, of course! “Ahhhhhhh! My brain hurts!!!” say most maintenance programmers.

It’s generally a bad idea to use “negative logic” in an if statement that has an else clause. It will confuse people.
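
In Java terms, the same point looks like this (a fragment of my own, mirroring the pseudocode above):

        final boolean inputIsValid = true;   // stand-in for whatever the real check is

        // Harder to read: the "else" branch runs when "not (not valid)".
        if (!inputIsValid) {
            System.out.println("Do A: reject the input.");
        } else {
            System.out.println("Do B: process the input.");
        }

        // Usually clearer: state the positive condition first.
        if (inputIsValid) {
            System.out.println("Do B: process the input.");
        } else {
            System.out.println("Do A: reject the input.");
        }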

“3. Do NOT throw exceptions if you can avoid it, it makes your code MUCH slower, if you feel like throwing something and then catching it – go play ball with your dog. If you don’t have a dog get one – they’re awesome!”
First, get a cat. Cats are way more cool. ^-^

Yes, what he says is true. There is a time and a place for exceptions. They are for “exceptional” conditions – things that have gone wrong. One hopes that this does not happen very often. Exceptions should *NOT* be used for flow control in business logic.

And exceptions are just about the only way to deal with some circumstances. Such cases may be an indication of bad design in the framework you’re using. Exceptions are just about the only way to stop processing in a SAX parser when you find that there is no good reason to read and process the rest of the file.
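
For example, here is a small sketch (the file name and element name are made up) of using an exception as the practical way to stop a SAX parse early, once you have found what you were looking for:

import java.io.File;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class FirstTitleFinder extends DefaultHandler {

    // A SAXException subclass used purely as a "stop parsing now" signal.
    static class StopParsingException extends SAXException {
        StopParsingException(final String message) {
            super(message);
        }
    }

    @Override
    public void startElement(final String uri, final String localName,
                             final String qName, final Attributes attributes)
            throws SAXException {
        if ("title".equals(qName)) {
            // Found what we came for; no reason to read the rest of the file.
            throw new StopParsingException("found the first <title> element");
        }
    }

    public static void main(final String[] args) throws Exception {
        final SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        try {
            parser.parse(new File("catalog.xml"), new FirstTitleFinder());
        } catch (StopParsingException expected) {
            // Not an error: this is the "stop early" signal described above.
        }
    }
}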

“4. Do NOT try to do many things on the same line – when you’ll get an error – it will be harder to debug, example how to NOT write your code:”
Generally true – particularly with complex expressions and function/method calls.

“5. Look for code pieces that look the same and if you find any – REFACTOR your own code!”
True.

“6. Proper names are a MUST. If you’re not sure, meditate on it for another minute. Still not sure? ask your colleagues for their opinion.”
(Honestly, I don’t know what he’s talking about here. “Proper names?” Like “Jeff Grigg”?)

[Edit:  Oh! So he means “meaningful names” (for classes, methods, variables, etc.).  Why yes, of course.  (And thanks to https://thecaptainnemo.wordpress.com/ for the clarification.)]

“7. Whenever you can use HashMap instead of List/Queue – use it!”
And something that I see faaaaaaaaaaaaaaaaaaaaar more often:
Would you people please ***STOP*** using ArrayList when what you really need is a Set?!? The List interface really should not include the “contains” method. If you’re using the “contains” method on List objects, you’re almost certainly doing it wrong. Try again. :-[
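
To make that concrete (names invented for the example), both of these compile and both “work,” but the Set says what you mean and does the membership check in constant time, while the List scans every element:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MembershipLookup {
    public static void main(final String[] args) {
        final List<String> authorizedUsersList = new ArrayList<>();
        final Set<String> authorizedUsersSet = new HashSet<>();

        authorizedUsersList.add("jgrigg");
        authorizedUsersSet.add("jgrigg");

        // O(n): walks the list until it finds a match (or reaches the end).
        System.out.println(authorizedUsersList.contains("jgrigg"));

        // O(1) on average: hashes the key and looks it up directly.
        System.out.println(authorizedUsersSet.contains("jgrigg"));
    }
}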

“8. If you can use caching instead of I/O (usually DB) – use caching”
Caching is a highly valuable tool. But it’s not a silver bullet. Be aware of its uses and limitations. There are times when you must not use it. An expert will recognize these situations.

“9. If you can parse it using regex – use regex”
Generally, people should use regex a lot more than they do. But as with any tool, it can be abused. (See rule #10, below. ;-)

“10. Do not parse HTML using regex”
Regex may be involved. But honestly, there are a crazy number of most excellent XML parsers available for free in Java.

A competent craftsman has good tools and uses them with skill.

“11. Print to Log.”
True. Beginners and hackers dump text all over the place with System.out.println statements scattered through the code.

“12. Use stackoverflow – not only for asking questions! take a few minutes, every day, and try to answer questions – you’ll be surprised how much you’ll learn from it!”
True. And others. Be an active part of your community. It will do you and others a world of good.

“13. A rule of thumb: if your method is over 50 lines – split it. If your class is over 500 lines – split it. If you think you can’t split it – you’re doing something wrong.”
True.
Double-plus true. ;->

‘14. Writing a code which is “self explanatory” is great but also very hard, if you’re not sure how “obvious” your code is – use code-comments.’
“15. When writing code comments always assume that the reader doesn’t know what you’re trying to do. Be patient & explain. No one is going to *bug* you because your comment was too long…”
Comments are a code smell.
The need to write comments to explain what the code doesn’t show is generally an indication that your code could be improved to be more readable. And then you would not need the comments. The one clear exception to this is comments that explain why the code does what it does. The code itself should clearly express what it does and how. “Why” comments are very useful.

And having said that, I have to agree that comments are often the best and quickest way to add meaning to the code. One doesn’t always have sufficient time or inspiration needed to make the code 100% clear.

“16. You want to improve? read books, blogs, technical online newspapers join relevant groups on Linkedin, update yourself with the latest technologies, go to conferences, got the point?”
“17. Practice makes perfect: solve code-challenges, join hackathons, go to meetups etc”
…and insightful wise postings like this one. ;->

“18. Choose one IDE and study it carefully, make sure you know the major features. Tune-up the keyboard shortcuts – it will make your workflow smoother.”
Strive to learn the keyboard shortcuts for things you commonly do.
(And, more generally, automate tasks that you do often!)

“19. Whenever you find a bug, before you fix it, write a unit-test that captures it. Make sure it does.”
True.

‘20. Don’t get lazy – RTFM’
‘We’ll finish with two quotes:’
“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
– Brian Kernighan
“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
– Rick Osborne
‘Be Sociable, Share!’

There’s a lot of wisdom in there.
(And I do know where you live. ;-)


My Experiences with Requirements Traceability

August 3, 2011

With some positive feedback on my post on the Yahoo Test-driven Development group, I thought I’d post my comments here, to make them more accessible:

— abrar arshad wrote:

Yeah I know requirements traceability has always been related to formal requirements specification and in agile we don’t have formal documentation. … For instance, how do you know where to make changes if your client wants you to change a feature which already has been developed. There is a chance that changing one feature might affect the others as well. …

I was involved in Document-Driven approaches for quite a few years before joining the XP community. “Requirements Traceability” was always a promise of the document-driven approach, used in part to justify its high cost. But honestly, I have never seen it deliver as promised:

Requirements traceability advocates say that it will show you where to make a new change, and that it will highlight conflicting requirements. I have never seen this happen in practice.

Consider this example: We have a system that is computing hourly pay in several divisions at several union plants. There is a fair amount of hard-coded conditional code. We just renegotiated the overtime rates for one of the divisions at two plants.

For requirements traceability to be useful, it has to be easier to find the original requirements for these divisions and to trace these requirements down to code than it would be to find the code directly. And to see conflicts, one would have to go from all the requirements that are relevant to the code back to the requirements documents — and then somehow figure out what to do with a whole bunch of requirements statements — to see if they conflict or overlap in any way, and how to resolve the issues.

Generally, in practice, it’s pretty easy to find the relevant code, even without any external requirements documentation. It’s easier to find the code than to trace through a tangled mess of requirement number references.

And to make the change… Add or change tests. Then change the code so that it passes the tests. If there are conflicting requirements, other tests will fail. You’ll look at the other tests and probably learn something. Sometimes it’s a technical issue, easily solved. Sometimes it is a real conflict in the business requirements. In that case, you will probably have to go back to the business requirements and maybe to the people who specify them to resolve the business issue.

So requirements traceability is not only costly and quickly out of date, it turns out to not be very useful. About the only good thing I’ve seen requirements traceability do is to serve as a checklist of all the things the system must do: When they’re all checked off, then you have reason to believe that the system does everything that’s been requested. User stories with automated acceptance tests also do this — with much higher justifiable confidence levels.


What if your bug crashes the Mars rover?

January 23, 2011

In the “This Developer’s Life” podcast episode “1.0.3 Problems,” mentioned on Scott Hanselman’s blog:

Regarding the “1.0.3 Problems” comments about how bad it would be to have written “the bug” that trashed a Mars rover:  I don’t think we have to speculate too much about hypothetical situations.  We have the Hubble telescope mirror, for instance:

http://en.wikipedia.org/wiki/Hubble_Space_Telescope#Flawed_mirror

and the Mars orbiter thing too:

http://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Encounter_with_Mars

“Bummer, dude” is one possible response.