Getting the Words Right
There has been some laudable progress in test-driven development (TDD), leading to what is now being touted as “behaviour-driven development” (note the English spelling). Two key proponents of this approach, among others, have been Dave Astels and Dan North, who helped set up the BDD organization.
According to Dave’s first posting on this subject more than a year ago:
Maybe 10% of the people I talk to really understand what [TDD is] really about. Maybe only 5%. That sucks. What’s wrong? Well… one thing is that people think it’s about testing. That’s just not the case.
Sure, there are similarities, and you end up with a nice low level regression suite… but those are coincidental or happy side effects. So why have things come to this unhappy state of affairs? Why do so many not get it?
The thing about BDD is that it is not a new discipline or a radical change from earlier initiatives. It begins from the observation that test-driven development is mostly about specifying behavior and only in small part about unit testing. It extends the metaphor from development to engage the sponsor and (as I argue below) the market as well.
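A minimal sketch in Python can show the shift in emphasis (this is my own illustration, not from the BDD site; the `Account` class and every name in it are invented, and BDD tools such as RSpec express this more idiomatically in Ruby). The same checks read as unit tests in one style and as behavioral specifications in the other:

```python
import unittest

class Account:
    """A toy domain object, used only for illustration."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

# Test-driven style: the names emphasize testing the unit.
class TestAccount(unittest.TestCase):
    def test_deposit(self):
        a = Account()
        a.deposit(50)
        self.assertEqual(a.balance, 50)

# Behavior-driven style: the names read as sentences describing
# behavior, a small step toward a shared, ubiquitous language.
class AccountBehavior(unittest.TestCase):
    def test_a_new_account_should_have_a_zero_balance(self):
        self.assertEqual(Account().balance, 0)

    def test_depositing_should_increase_the_balance_by_that_amount(self):
        account = Account(balance=100)
        account.deposit(50)
        self.assertEqual(account.balance, 150)
```

Nothing mechanical changes between the two classes; what changes is that a sponsor or user could read the second set of names aloud and recognize the system's intended behavior.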
One of the things I find most compelling about the BDD approach is its emphasis on what salespeople in the SPIN methodology have called “common language” and the domain-driven design people have called “ubiquitous language.” The notion is that all stakeholders in a project — importantly including the market, users and sponsors — need to have a common vocabulary that is simple, accurate, accessible, descriptive and consistent. In short, if such a language can be defined and used assiduously, it becomes compelling and memorable. From the standpoint of development, this leads to consistency and clear communications, with the real side benefit of greater productivity. From the standpoint of use and acceptance (“sales”), clear language leads to broader and quicker adoption.
Mindset matters. The language we use in our actual code, the language we use to describe our projects internally, the language we use to communicate the wonderful stuff we have created to the outside world: all of this matters. (Three cheers for dynamic languages and domain-specific languages – DSLs.) In fact, it matters so much that if we are not taking the market’s viewpoint about what and how to explain this stuff, we are likely producing crap that no one is interested in.
We all reflect the tools and the terminology that we use to work our way in the world. Development, testing (behavioral design), and programming languages should all be in sync with our users’ end goals. What is wrong with users being able to read our code and understand what it is intending to do?
The BDD Web site does not yet offer any “cookbooks” for how such language is actually developed or what specific steps need to be followed. (All practitioners would agree this is a hard process that requires focused attention.) But I think the protagonists are on to something very meaningful and real here.
Modular code that is well tested, developed with agile dynamic languages, and designed for clarity and purpose with all stakeholders is good code. I encourage the community to pay close attention and to get involved with BDD.
An AI3 Jewels & Doubloon Winner
“Hitting the 80/20 point is a very central concept” - Tim Bray
A recent InfoQ interview of Tim Bray by Obie Fernandez — entitled Tim Bray on Rails, REST, XML, Java, and More — is wide-ranging, cogent and thought-provoking. The subjects range from (naturally) XML and Java to Ruby, Rails, the semantic Web, agile programming, dynamic languages and typing, web services (WS*), you name it.
I most appreciate the down-to-earth sense of it all. The 30-min video and its transcript are well worth listening to and studying in their own right, but let me illustrate the quality of the interview by Tim’s answer to one question regarding Web services:
“So here’s the problem: we have a radically heterogeneous computer environment. There are different operating systems, different languages, different databases, different computer architectures and that’s not going to go away. The IT profession has struggled for decades, literally decades, on how to build applications to work across this heterogeneous network and accomplish useful things, and by and large have done a pretty bad job. Corba was sort of a sad story. Microsoft DCOM was understood by only 8 people in the world, and then all of a sudden about 10-12 years ago there was this application that worked across heterogeneous networks, had high performance, had extremely broad scaling, ignored networking issues apparently and worked great; that was the World Wide Web.
The world, not being stupid, said maybe there’s something we can learn from that. The thing about the web is that if you look at it, it has no object models and it has no APIs. It’s just protocols all the way down. Some of the protocols are loose and sloppy like HTML, and some of them are extremely rigorous like TCP/IP. But if you look at the stack there’s no APIs, there’s protocols all the way down. I think that the thing that you take away from that, is that that is the way to build heterogeneous network locations. A few other things that we learned from the web is that simple message exchange patterns are better; I mean HTTP has one message exchange pattern; I send you a message, you send me a message and the conversation is over. And it turns out to have incredibly good characteristics and so on.
Now, the other thing that came along around the same time was XML, and it provided a convenient lingua franca to put in the messages you’re going to send back and forth. The basic take-away is: “Let’s adopt the architectural pattern of the web by specifying interfaces in terms of message exchange patterns, let’s make those message exchange patterns simple, let’s try and make statelessness possible and easy because that’s on the truth path to scaling.” I think that idea has legs, it’s really the only way forward.
The fact is that 10 years from now there’s still going to be Rails apps here and Java apps there and they’re going to have to talk to each other. The only way to do that is by sending messages back and forth.
Somebody said to standardize that. And that led us down this insane trail and the destruction of WS*. If you look at WS* there are these huge universal schemas comprising thousands of pages of specifications, mostly cooked up in back rooms at IBM and Microsoft. Many of them are still unstable years into the project, and they are based on XML Schema and WSDL, which are two of the ugliest, most broken and irritating specifications in the history of the universe. I just totally don’t believe you can build a generic world-changing infrastructure for the whole software development ecosystem based on broken specifications at the bottom level. So those guys have gone off the rails!”
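Bray’s “one message exchange pattern” can be sketched with nothing but the Python standard library (the resource path and all names here are invented for illustration): one request goes out, one response comes back, and the conversation is over. There is no shared object model, no API beyond the protocol itself.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy resource server: every conversation is one request, one response.
class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"you asked for {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console logging

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The whole message exchange pattern:
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/orders/42")   # I send you a message...
response = conn.getresponse()       # ...you send me a message,
data = response.read().decode()     # and the conversation is over.
conn.close()
server.shutdown()
print(response.status, data)        # prints: 200 you asked for /orders/42
```

The client needed to know nothing about how the server is built, only the protocol, which is exactly the heterogeneity argument Bray is making.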
The Death Star WS* image, by the way, came from David Heinemeier Hansson of Rails fame and was used at the same Canadian Rails conference at which both spoke.
XML just celebrated its 10th birthday this summer.
As Jon Bosak notes on the History of XML:
Many people know that XML grew out of the expertise of the SGML community, but few people realize even today that the whole two-year effort to develop XML was organized, led, and underwritten by Sun.
What began as a “stripped down” data-oriented SGML (driven by simplicity arguments similar to those that also led to HTML) has now truly become the ‘eggplant that ate Chicago.’ All of the WS* dialects, RDF, OWL, BP* (business process), etc., are now ubiquitously expressed in XML. Do you know of any serious enterprise app today that expresses its data exchange or configuration files in anything but XML?
Yet, within the last decade, there were learned fights and advocacies for such standards as ASN.1, CDF, HDF, EDI, yeech, yeech . . . . How did XML so easily win without a whimper; how did this come to pass?
That question is one of those that launched a thousand theses.
The ubiquity of the Internet and rising transmission speeds settled earlier arguments about abstraction and data transfer efficiency. XML looks (is!) inefficient and adds many characters, but transmitting these longer strings is no longer a bottleneck. The simplest answer as to why XML won the day is that the old limits of slow network transmission, since overcome through faster networks and the availability of general, fast parsers, no longer governed the winning equation. Direct, text-based expression leads to simple solutions, even though computer scientists who focused for years on optimal network transmissivity cringe. Yeow!
In other words, the data transfer protocols of the past couple of decades erred on the side of elegance and parsimony. Too bad that interconnection speeds (importantly, filtered through the sieve of what is immediately of actual interest) have bludgeoned prior sensitivities toward elegance. If it parses, do it! Happy birthday!
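The “if it parses, do it” point is easy to demonstrate: every mainstream language now ships a general XML parser, so verbose-but-textual wins on convenience. A minimal illustration with Python’s standard library (the document itself is made up):

```python
import xml.etree.ElementTree as ET

# A verbose, human-readable message: wasteful by ASN.1 or EDI
# standards, but trivially produced and consumed everywhere.
doc = """
<order id="42">
  <item sku="A-100" qty="2"/>
  <item sku="B-200" qty="1"/>
</order>
"""

root = ET.fromstring(doc)
print(root.tag, root.get("id"))          # prints: order 42
for item in root.iter("item"):
    print(item.get("sku"), item.get("qty"))
```

Three lines of parsing code, no schema compiler, no binary encoding rules; that convenience, multiplied across every platform, is much of the “why” above.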
Just in Time for Christmas: Vista in the Crosshairs
Or, Give your computer the bird.
Computers are frustrating. Creating documents, finding files, sharing information — why do everyday things still seem so tedious and counterintuitive?
Dave Kushner interviews Blake Ross and gets a preview of his new Parakey venture in the November issue of IEEE Spectrum. Ross, a 20-year-old wunderkind and one of the driving forces behind the Firefox browser, has teamed with Joe Hewitt of Firefox and Firebug fame to create an absolutely disruptive new approach to computing. Quoting from Kushner’s article:
Just as with Firefox, Ross began this project by asking himself one simple question: What's bad about today's software? The answer . . . resided in the gap between the desktop and the Web. . . . The problem, according to Ross, is there's no simple, cohesive tool to help people store and share their creations online. Currently, the steps involved depend on the medium. If you want to upload photos, for example, you have to dump your images into one folder, then transfer them to an image-sharing site such as Flickr. The process for moving videos to YouTube or a similar site is completely different. If you want to make a personal Web page within an online community, you have to join a social network, say, MySpace or Friendster. If you intend to rant about politics or movies, you launch a blog and link up to it from your other pages. The mess of the Web, in other words, leaves you trapped in one big tangle of actions, service providers, and applications. Ross's answer is . . . Parakey, "a Web operating system that can do everything an OS can do." Translation: it makes it really easy to store your stuff and share it with the world. Most or all of Parakey will be open source, under a license similar to Firefox's.
Thus, Parakey aims to bridge the divide between desktop operating systems and the Internet, using the browser as the common user interface. Parakey will give users the ability to easily host their own Web sites from their desktops. Even though Parakey works within the browser (all leading ones are to be supported), it actually runs on the local computer, which lets developers do many things not possible in a traditional Web site. By assigning simple “keys”, the desktop owner can make content of his or her choosing — from documents to photos to files — “public” to the distribution lists associated with those keys. Remote users are issued cookies so that their access to the local resources is seamless.
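Parakey’s internals have not been published, so the following is purely a guess at the flavor of key-based sharing; every name and structure here is invented and reflects nothing of the actual design. The core idea, an owner attaching resources to a key and remote visitors presenting that key, is simple:

```python
# Hypothetical sketch: keys the owner has handed out, each mapped to
# the set of local resources attached to it. Not Parakey's design.
keys = {
    "family-key": {"/photos/vacation", "/photos/kids"},
    "coworkers-key": {"/docs/roadmap"},
}

def can_access(key, path):
    """A remote visitor presenting `key` (say, via a cookie) may see
    only the resources the owner attached to that key."""
    return path in keys.get(key, set())

print(can_access("family-key", "/photos/vacation"))    # prints: True
print(can_access("coworkers-key", "/photos/vacation")) # prints: False
```

Whatever the real mechanism turns out to be, the appeal is the same: sharing becomes an access-control decision the owner makes once per key, not a per-site upload workflow.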
Similar to the models of the Firefox plugin or Web services, the basic Parakey platform can be easily extended. Ross and Hewitt have created a programming language, JUL (for ‘Just another User interface Language’), likely similar to the Mozilla XUL, for developers to write these components and extensions. Though the launch date for Parakey is being kept under wraps, all signals point to before January. The pre-launch company site allows interested parties to enter their email address to receive formal notification of the launch.
It is rather amazing that this article came out on the same day, yesterday, as John Milan’s blog post on Elephants and Evolution – How the Landscape is Changing for Google, Microsoft, Mozilla and Adobe on Richard McManus’s Read/Write Web blog. In that post, Milan posits Mozilla as another one of the gorillas (elephants) in the room and Adobe’s Apollo project as another “under the radar” approach to the desktop/Internet browser convergence.
All of this seems rather ironic as the world (Redmond) awaits the release of the long-delayed Windows update, Vista. Even the mighty do indeed live in interesting times.