Writing Code for Other People

Writing code is hard. Getting code to work in a repeatable, consistent fashion that addresses all of the possible use cases and edge cases takes deliberate time and fortitude. Yet getting code working is only half the battle of writing good code. See, good code not only solves the problem; it does so in a manner that lets other people use your solution not just to solve their problem, but to understand the problem, to understand the solution, and to understand how to go beyond it. Good code works, but great code teaches.

So, with that in mind, how do we as engineers write great code? What separates code that merely gets the job done from code that empowers the people who read it? How do we transform our code from simply functional to deliberately empowering?

Below I present my ideas on what makes code approachable to others. Following these techniques will make your code more readable, more understandable, and more reusable. Meeting these goals requires four specific areas of your attention: readability, context, understandability, and reusability.

Readability: Write Beautiful Code

The first step in writing great code is to make it readable to others. By the very nature of using a high-level programming language, one would think that readability is solved, but a programming language alone is not enough. A programming language, like any communication system, is merely a means to express an idea. It is how we use that language, and how we structure that usage, that makes it beautiful or not, that gives it real understanding.

1. Indentation Conveys Hierarchy

Most modern programming languages do not require indentation. You could simply write all your code like this:

function add(x,y) {
return x + y;
}

and it would compile and execute just fine. But clearly, there is a reason why most modern programming languages do allow for liberal indentation. This code with indentation is infinitely more readable:

function add(x,y) {
    return x + y;
}

We use indentation to indicate hierarchy. This works because of how we visually scan things with our eyes. Western-language readers are taught to scan from top to bottom, left to right. Indentation plays into that. As you scan down the code from top to bottom, the left-to-right indentation allows you to easily ascertain hierarchy within the code.

The return statement in the above code is clearly subordinate to the function definition, merely by the act of indenting it.

In fact, beyond computer languages, most descriptive languages like HTML, XML, CSS, and JSON all support indentation to imply hierarchy. One of the hardest things of all when inheriting someone else’s code or data is it not being well formatted. Reformatting tools and format commands mostly work, but not always, and that is where things can rapidly fall apart.

2. Meaningful Whitespace

Indentation is not the only way we convey meaning in our code using whitespace. Consider this code:

const FS = require('fs');
const contents = FS.readFileSync(process.argv[2], 'utf8');
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

Like our indentation example, this is perfectly executable code that will run and do the several “things” it is intended to do. But understanding what those “things” are is not so clear.

Code runs in an organized top-to-bottom fashion, and we mostly structure that code into logical steps: the code reads a file, the code breaks the file contents up by newlines, and so on. These are the steps that your code performs.

If you look at this code you can even begin to see each of these steps. It might take a minute or two, but the steps are there.

Now, look at this code again:

const FS = require('fs');

const contents = FS.readFileSync(process.argv[2], 'utf8');

let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

The logical steps of our code are much clearer to see, much cleaner to read. The only change between the two examples is whitespace.

This process of visually splitting the logical steps of code apart is called meaningful whitespace. Meaningful whitespace is the art of using well-placed line breaks to chunk code into discrete steps. Just like indentation, we use whitespace, blank lines in this case, to organize the logic of what our code does so other people can easily understand it. And this is about more than just breaking up large chunks of text: smaller blocks of code are easier to understand when examined than larger ones. When reading this code, one can easily see that the lines within each step have something in common and work together to achieve some result.

This is another reason that most programming languages ignore whitespace: it leaves you free to use whitespace for the reader instead of the compiler.

When inheriting someone else’s code, it is not uncommon to find a complete lack of meaningful whitespace. The burden this places on the developer, who must go through the code and chunk it up themselves, is significant and slows down their rate of understanding. By grouping your code into chunks that are digestible and understandable to someone else, you shorten their time to understanding.

Context and Comments

Up to now we’ve talked about how to make our code more readable. Indentation and meaningful whitespace provide readers of your code with easy cues to help them scan and break down your code, and this makes the code more readable. Readability is the first key to great code.

The next step to great code: context. That is, once my code can be easily read, how do I give it meaning so it can be easily understood? And the keys to understanding code are understanding the context any section operates within and on, and understanding how the logic of any section flows.

3. Contextual Naming

Fun fact: excluding comments and data, the only things in your code that you get to name are classes, variables, and functions. Everything else is a pre-defined keyword or operator of some type. This is why class, variable, and function naming is so important: it is the only part of your actual code where you can provide context about what your code does.

Consider the following code:

function x(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : x(y - 1) + x(y - 2);
}

What this code does is not obvious by just reading the code. Providing a contextual name to the function changes everything here:

function fibonacci(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : fibonacci(y - 1) + fibonacci(y - 2);
}

Class names, function names, variable names, these are opportunities to describe what they represent (classes), what they do (functions), and what they hold (variables). The name provides context to the role in the system.

Now, there are a few special rules around this, for sure. When iterating a number in a loop, we use i for the index. This is a well-known convention. But even this convention can stand breaking sometimes. If the context is more important than the convention, always go for more context, as in the sketch below.
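
For example, here is a quick illustrative sketch (the variable names are hypothetical): the conventional i is fine for a single, simple loop, but contextual index names pay off as soon as loops nest.

// Convention: i is fine for a single, simple loop
const items = ['alpha', 'beta', 'gamma'];
for (let i = 0; i < items.length; i++) {
    console.log(items[i]);
}

// Context: named indexes are clearer when loops nest
const grid = [[1, 2], [3, 4]];
for (let rowIndex = 0; rowIndex < grid.length; rowIndex++) {
    for (let columnIndex = 0; columnIndex < grid[rowIndex].length; columnIndex++) {
        console.log(grid[rowIndex][columnIndex]);
    }
}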

4. Convey Logic through Comments

Class names, function names, and variable names all provide context as to their roles, but program logic is dictated by keywords, and in almost all programming languages there is no way to change keywords. So how do we provide context and meaning around program logic?

That is where comments come in, specifically inline comments within the code. I view this differently than documentation. Documentation answers the bigger questions about the code’s purpose and how people use the code. Comments, instead, help to describe the logic, and the context around the logic, in the code. (We will talk about documentation in greater detail in a little bit.)

Comments within your code are simple, useful hints as to how the logic within the following section of code works. Have you ever read a block of code and then wondered to yourself what is going on? That is a failure of commenting. The initial author failed to recognize that another developer might not understand the code, and that stems from either hubris or laziness, neither of which is a particularly good habit to have as a developer.

Now, this is not advocacy for writing a comment before every line of code. I am a firm believer in writing code that is simple to follow and does not require comments to understand, but that is not always achievable. And over-commenting creates a whole separate level of problem. Instead, comment where it is needed. When considering any section of code (which you nicely formatted with whitespace), ask yourself, “Is it readily obvious what it does?” If the answer is even slightly no, take a second to add a comment or two of context.

This code from an earlier example is a lot more understandable for a handful of comments, and adding these comments as the logic is written costs almost nothing:

// Bring in the standard filesystem library
const FS = require('fs');

// Read the file
const contents = FS.readFileSync(process.argv[2], 'utf8');

// Split our file content into lines and
// convert each line to an array of simple words
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.trim();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

// Count the occurrence of each word in all the content.
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

// Output our word cloud and frequency 
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

There are two different uses of comments. First is what we covered above, to add logic understanding to your code. The second use of comments is to add documentation around how your code is used. This second type is covered in a later section, so please keep reading.

For logic comments, the general rule of thumb is to use // instead of /* */ in your code. /* */ block-style comments are really more for providing documentation than for quick one-off logic comments.

Also, when writing logic comments, do not hide them at the end of a line. Put them on their own line so that they are more visible and easier to see.

superWeirdThing = !superWeirdThing; // DON'T COMMENT LIKE THIS
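
Instead, give the comment its own line above the code it describes (the comment text here is just illustrative):

// Toggle the super weird thing before the next render
superWeirdThing = !superWeirdThing;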

Organize Your Code

Understanding your code comes partially from providing context about what that code does, but equally important to understanding is organizing and structuring your code in a repeatable, understandable, and accepted fashion.

5. Organized from Top to Bottom

The first step in organizing your code is to physically lay the code out in logical sections. Almost all modern programming languages have a recommended approach to organizing your code. Different languages have different approaches to this, and even different approaches within a single language. JavaScript is one of the more chaotic languages, with little formal agreement on the best way to do this. However, there are still little things you can do without a formal, community-accepted standard. For example, import/require statements are expected to come first in the file.

The important thing here is to keep like with like. Do not put two class definitions with variable assignments scattered between them; put the class definitions together. This is an implied structural organization for your code, and it applies at the file level, the class level, and the function level.

A common layout pattern for a file in many languages is:

  • Require Libraries
  • Declare Constants
  • Declare Variables
  • Declare Classes
  • Declare Functions
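
As a minimal JavaScript sketch of that file order (all of the names here are illustrative, not prescribed):

// Require Libraries
const FS = require('fs');

// Declare Constants
const MAX_RETRIES = 3;

// Declare Variables
let retryCount = 0;

// Declare Classes
class RetryPolicy {
    // ...
}

// Declare Functions
function retry(task) {
    // ...
}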

A common layout pattern for a class in many languages is:

  • Declare Static Members
  • Declare Members
  • Declare Static Methods
  • Declare Constructor
  • Declare Methods
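
And a sketch of that class order in modern JavaScript (again, the names are illustrative):

class Car {
    // Static Members
    static WHEELS = 4;

    // Members
    #make = 'Chevy';

    // Static Methods
    static fromJSON(json) {
        return new Car(JSON.parse(json).make);
    }

    // Constructor
    constructor(make) {
        if (make) this.#make = make;
    }

    // Methods
    toString() {
        return `A ${this.#make} with ${Car.WHEELS} wheels.`;
    }
}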

There is a common layout pattern for writing functions in some languages, but JavaScript is a little more loose here, so I am not going to offer a formal pattern. Instead, I will tell you how I like to structure things for a function:

  • Arguments and Argument Checking at the top
  • Declare variables as I need them, but nearer to the top if possible.
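
As a small sketch of that shape (a hypothetical function, not from the examples above):

function averageOf(numbers) {
    // Arguments and argument checking at the top
    if (!Array.isArray(numbers)) throw new TypeError('numbers must be an array.');
    if (numbers.length === 0) throw new RangeError('numbers must not be empty.');

    // Declare variables as needed, near the top
    let sum = 0;
    for (const num of numbers) sum += num;

    return sum / numbers.length;
}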

6. Separation of Concerns

In computer science, the concept of Separation of Concerns (SOC) allows one to divide a computer program into logical sections based on what each section of that computer program does. In web development, HTML, CSS, and JavaScript are an example of Separation of Concerns, where HTML is responsible for the content, CSS is responsible for how the content looks, and JavaScript is responsible for how the content behaves. Model-View-Controller (MVC) is yet another Separation of Concerns system that is often used.

But beyond a fancy CS degree, Separation of Concerns means building each thing to do its one thing well, without side effects. It means writing many functions that each do one thing well, instead of one single function that handles all the cases. It also makes more sense, in terms of reusable code, to break things up into smaller discrete units.

It also means letting the various pieces do exactly what they are good at and not trying to force them to do tasks that are better suited elsewhere. It is for this reason that we should not write style information in JavaScript when a perfectly good CSS rule will do. CSS is optimized for dealing with these things and JavaScript is not, so why try to outwit it? Of course, there are always times when you have to break these rules, but they are very, very rare.

Related to SOC, in computer science there are three other principles to be aware of:

  • Coupling – Coupling is the measurement of how interdependent two modules or sections of code are. There are a lot of types of coupling between two sections of code, with a lot of fancy names and descriptions. For now it is best to just understand it this way: How dependent on A is B, and vice-versa? Can A run without B? Can B run without A? Two modules are said to be Tightly Coupled if they are very dependent on each other. Conversely, they are said to be Weakly Coupled if they are only slightly dependent on one another. Coupling in code is necessary because code executes linearly. However, designing classes and functions to be independent provides for better re-use and more understandable code. Consider some code you are looking at and that moment when you say, “Well, this calls function X, now where is function X and what does it do?” This is an example of coupling.
  • Cohesion – Cohesion is somewhat the opposite of coupling: it measures the degree to which things within a section or module of code belong together. This is usually described as having High Cohesion, meaning things within the module belong together, or Low Cohesion, meaning that they do not. In Object Oriented Programming, high cohesion within any given class is a design goal. The class should do what it sets out to do and only what it sets out to do. If I have a class called Car and inside that class there is a function called juggle(), it probably means my class has poor cohesion (and, interestingly, probably also implies tight coupling with something else). Side effects from running a module or function imply poor cohesion. If the function juggle() also fills the car with gasoline, either this is an amazing juggling trick, or there is a side effect in the code.
  • Cyclomatic Complexity – Cyclomatic Complexity is the measure of how complex a section of code is, based on the number of paths through the code. That is to say, the more non-linear outcomes the code generates, the higher its Cyclomatic Complexity score. This is often judged by the number of conditional statements in your code. A function that has no conditionals is said to have a Cyclomatic Complexity of 1. If it had a single conditional with two outcomes (a typical if/else, for example) it would have a Cyclomatic Complexity of 2, and so on (see the sketch after this list). Cyclomatic Complexity is not bad in and of itself, but the more complex the code, the higher the possibility of failure, and the more testing required. A function should strive for a lower complexity score if possible.
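
To make that counting concrete, here is a small sketch using two hypothetical functions:

// Cyclomatic Complexity of 1: exactly one path through the code
function double(x) {
    return x * 2;
}

// Cyclomatic Complexity of 2: the if/else creates a second path
function parity(x) {
    if (x % 2 === 0) {
        return 'even';
    }
    else {
        return 'odd';
    }
}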

We talk about these three measures of code in order to reinforce the point of Separation of Concerns. A separated module is one that is not coupled directly to another module, is cohesive in what it does, and keeps complexity low.

When you find yourself in code that is hard to follow, it is almost always because it fails on one of these three factors.

One way to reduce complexity and structure your code better is to identify the cohesive sections of your code and isolate them into Logical Blocks. A Logical Block is a portion of your code that requires multiple lines to complete a single task. As we talked about earlier, breaking your code up with meaningful whitespace makes these logical blocks easier to identify. Adding comments makes them clearer in their purpose.

But a Logical Block also is a good hint as to how to abstract logic into small, more cohesive behaviors. For example, if your logical block needs to get reused, isolate it out into a new function.

Code Folding is also worth mentioning. Code Folding is a feature of most modern IDEs that allows you to hide (or fold) blocks of code, and it can be a powerful code-reading tool. However, Code Folding requires certain syntax structures to work. By thinking of your code as Logical Blocks, you can see where Code Folding opportunities could exist, if the logical block were in an acceptable structure.

Thinking about your code as Logical Blocks also allows you to more easily see other ways to do the same thing, ways that may be cleaner and more cohesive. It illustrates where there may be room for improvement in your code as well.

For example, this code is a Logical Block.

let sum = 0;
for (const num of numbers) {
    sum += num;
}
const avg = sum / numbers.length;

You could change this logical block to be more readable by…

  • Abstracting it out to a function, especially if it is going to be reused.
const avg = calculateAvg(numbers);
  • Wrapping it in an IIFE (Immediately Invoked Function Expression)
const avg = (() => {
    let sum = 0;
    for (const num of numbers) {
        sum += num;
    }
    return sum / numbers.length;
})();
  • Using a built-in method to do it more simply.
const avg = numbers.reduce((sum, num) => sum + num, 0) / numbers.length;
  • Using a third-party library helper function (Lodash’s _.mean, for example).
const avg = _.mean(numbers);

All of these are better than the original because they create an isolated block that cleanly identifies what it does. Proper whitespacing and a comment above the block make this tight, readable, well-reasoned code.

Code Usability

At some level, everything you see, smell, taste, touch, or interact with is a User Interface. An API, for example, is a User Interface between your backend and your frontend. This goes for the code you write as much as anything else. At some point, another person is going to read your code, run it, and even have to make sense of it. So, build your code like you would an API. Take into consideration how another person is going to interact with that code, how they are going to try and reason about it, how they are going to try and work within it.

Build everything with another user’s experience in mind.

7. Defensive Coding

Software is a piece of code that produces some sort of output, usually based on some sort of input. Now, assume someone is going to run your code at some point. Assume that that person is not going to read the documentation. What would happen? Would it work if the first argument is null? Would it work if the second argument is a negative zero or a NaN? How your code behaves when executed under uncertain conditions is a testament to the quality of engineering employed in building it.

One approach to handling these questions is called Defensive Programming. In defensive programming you assume that users will maliciously try to cause problems with your code, and therefore your code must guard against their attack. This means making the assumption that all data is tainted and must be verified to be correct.

Here’s an example:

if (this.onClose) this.onClose();

The assumptions made by this code are not defensive. The snippet assumes that onClose is a function and then executes it as a function. A corrected version of this code might look like this:

if (this.onClose && this.onClose instanceof Function) this.onClose();

But there is another assumption here as well: that executing this.onClose() will work. So even more defensive code might do it thus:

if (this.onClose && this.onClose instanceof Function) {
    try {
        this.onClose();
    }
    catch (ex) {
        // ... do something about the error ...
    }
}

There is even an argument that there is another defensive point here: if this.onClose() is a long-running process, it could grind your thread’s execution to a halt. An even more defensive posture might be thus:

if (this.onClose && this.onClose instanceof Function) {
    setTimeout(() => {
        try {
            this.onClose();
        }
        catch (ex) {
            // ... do something about the error ...
        }
    }, 0);
}

That last form might not work for all situations, but it is extremely defensive.

The key to writing code defensively is to check your inputs early in your code. Make sure what you are getting is what you are expecting. If your function only works with even-numbered integers, you had better make sure it accepts only even-numbered integers.
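
For instance, here is a sketch of that even-numbered integer check (the function itself is hypothetical), done before any real work begins:

function halveEven(value) {
    // Verify the input before using it
    if (!Number.isInteger(value)) throw new TypeError('value must be an integer.');
    if (value % 2 !== 0) throw new RangeError('value must be an even integer.');

    return value / 2;
}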

The same is true for classes. Write your classes on the assumption that users are going to try to harm them and break your code. This means writing defensive methods, but also preventing unwanted or unknown mutation of class state. Setters are a great way to protect your class from bad user input and should be used whenever possible instead of exposed public members. Consider this class:

class Car {
    make = 'Chevy';
    model = 'Malibu';
    mpg = 25;
    tankSizeInGallons = 14;

    toString() {
        return `The ${this.make} ${this.model} can go ${this.mpg * this.tankSizeInGallons} miles on a single tank of gas.`;
    }
}

The publicly exposed make, model, mpg, and tankSizeInGallons are all mutable without any thought for defensiveness. A malicious coder could cause quite a problem with these variables. A more defensive style would be to use private variables (or Symbols) to hold the values, and to provide public getter and setter functions to expose them. The setter functions, in particular, can do error checking and ensure types and values match acceptable inputs.
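
Here is a sketch of that more defensive shape using a private field with a validating setter (only mpg is shown; the same pattern applies to the other members):

class Car {
    #mpg = 25;

    get mpg() {
        return this.#mpg;
    }

    set mpg(value) {
        // Reject anything that is not a positive, finite number
        if (typeof value !== 'number' || !Number.isFinite(value) || value <= 0) {
            throw new RangeError('mpg must be a positive, finite number.');
        }
        this.#mpg = value;
    }
}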

8. Code is a User Interface

Your code, whether a component, a utility, or an API, is going to get used by someone else. This is even more true when one considers how pervasive the sharing of code online has become. And if you work in a team setting at all, that compounds the likelihood that someone else will need your code. I do not care how throwaway you believe your code is: assume that someone is going to use it.

To that end, and as stated before, treat your code like a user interface. Designing your code to be read is important, but designing your code to be used by others is equally important. Ask yourself how another user is going to come in contact with your code. If they see the function signature, does it provide enough information as to what it does and how it works, or does it require additional information? If they read your code, is it obvious how to step through it from line to line, or does it require more contextual logic?

A lot of what we have already covered in this document describes how to write code for use by others, but it bears repeating here:

  • Design your code to be read by others.
  • Highlight hierarchy with indentation.
  • Separate discrete sections of behavior with whitespace.
  • Name classes, functions, and variables contextually.
  • Provide comments around programming logic.
  • Clearly separate your application concerns.

Following these steps is the first part of writing code like a user interface.

The second part is to always be asking yourself, how will others use my code? Challenge yourself to always be mindful of this question and your code will be the better for it.

Most professional coding work is done in a shared environment where everyone is always working in each other’s code; any team experience is. As such, it is critical to always be considerate of how others are going to use your code. Whenever you write a new component, you should be consciously aware that someone else may very well need what you are building for their own purposes. Write your component with that reuse in mind. But also be insatiable in your desire to make your component as accessible to other developers as possible. The more we reuse one another’s components, the better our product becomes.

9. Documentation

Finally, let us talk about documentation. To many software engineers, the act of writing documentation is seen as an afterthought, something to be farmed out to some loser English Literature person desperate for a job. But the reality of software engineering is that it is only 50% about writing code. The rest of software engineering is communicating with other people.

As we have stated above: someone else is going to use your code. It is inevitable. Given that, telling those users how to use your code is critical. If you fail to communicate how your code runs, you might as well not have written the code at all.

We talked about comments in a prior section. Comments are used to describe the logic your code follows. Documentation, on the other hand, describes how a user of your code interacts with the code and gets it to do what is expected.

Unlike inline comments that describe logic blocks, documentation is about describing the interfaces of your code, the places where others will interface with it. We write documentation to tell the people using the code what it does and how it is used. This does not mean you document every single function and variable. Instead, it means you document every public-facing part of your code, every place others will interact with it.

Fortunately, in this day and age, you do not even have to lift your eyes out of your IDE to write documentation. Most modern languages have an associated documentation language: Java has JavaDoc, TypeScript has TSDoc, JavaScript has JSDoc, and so on. These documentation languages assist you in describing your classes, functions, and variables, every aspect of your code, as much as you want.

Even better, most modern IDEs have these documentation languages built right into them and have tooling that makes writing that documentation easier.

Here’s an example of using JSDoc:

/**
 * Move the selection to the "first" section.
 * @return {void}
 */
selectFirst() {
    const first = this.sections.find(section => section.startHere);
    this.select(first || this.sections[0] || null);
},

In this example the comment block that precedes the selectFirst() function describes what it is and what it returns.
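
Functions that take arguments benefit even more. Here is a hypothetical sketch (it assumes each section has a name property) documenting the parameters and the return value:

/**
 * Select the section with the given name, if it exists.
 * @param {string} name - The name of the section to select.
 * @param {boolean} [caseSensitive=false] - Whether matching respects case.
 * @return {boolean} True if a matching section was found and selected.
 */
selectByName(name, caseSensitive = false) {
    const target = caseSensitive ? name : name.toLowerCase();
    const match = this.sections.find(section => {
        const sectionName = caseSensitive ? section.name : section.name.toLowerCase();
        return sectionName === target;
    });
    if (match) this.select(match);
    return Boolean(match);
},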

Documentation languages vary by programming language, so please read up on your particular one.

One more note about documentation: your code is not self-documenting. Many, many people have said this over the years, and every single one of them has been wrong. What they are really saying is that their code is readable and that readability is enough. In some cases this is true, but mostly it is not. Please add documentation to your code about what it does and how it is used.

Denouement

So that’s it. Above we outlined nine (9) separate ways to make your code more readable, more descriptive, more defensive, and more user friendly. It’s a lot, to be sure, but each of these approaches is a small step towards the larger goal of writing better code, code that is more than just runnable.

We recognize that this could be seen as a lot. We all know that software is a demanding job, full of demanding bosses and customers who think it is super easy. And these best practices take additional time, they cost cycles, and adopting them can seem overwhelming. However, starting small is the key here. When considered as a whole, these rules can seem daunting, but individually they are not so rough. Try adopting one or two of them at first, then a few weeks later add another, until they are all second nature. It’s a good bet you are already doing one or two of these without even thinking about it. So, add a few more, then a few more, and the next thing you know you will be shipping code you are proud to have others read.

permalink: http://www.arei.net/archives/302
THE HUMAN RELATIONSHIP PACKAGE – LEARNING FROM EVENT-STREAM

Authoring and sharing a package is a relatively simple activity these days. Systems like GitHub and npm make it easy for an author to share and publish their work. This is a great thing; it is a wonderful time to be a developer and involved in a developer community.

However, there is a cost to sharing and releasing one’s code that is often unwritten and overlooked. Authors may feel like they are sharing their grand ideas with the world, but they often fail to understand the expectations that go along with publishing a package. Likewise, users think that they can just download a package in an all-you-can-eat buffet of open source software, never realizing that the very success of the package they are downloading requires them to participate in the community of that package. Both sides have responsibilities in the situation and for either side to ignore those responsibilities is a recipe for failure.

Several months ago the JavaScript community dealt with this very problem. A highly respected package author had shared a package (event-stream) he wrote for fun, and it got very popular. As is often the case, popularity came with more requests for support and bug fixes. In short order requests became demands, and the author’s time and interest lagged. All the while, very few users were willing to step up and help. So when someone did step up and help, the author accepted and turned over the repository. Shortly thereafter malicious code was introduced into the package, and everyone who relied on the package became a potential victim.

In many ways event-stream was a kind of “perfect storm” event that may be unlikely to occur again. Yet, the event-stream saga is a very real example of the cost of publishing code for others to use. Some would blame the author for not providing a better notification that he was handing the package off to someone else. Others would blame the users for not supporting the package either financially or through participation. Still others would point out that this is a professional community and as a professional one is required to know and understand how their software works and the systems it relies upon work. Yet, no matter how we throw the blame around, the reality is everyone paid the price for the failure, and everyone (the author, the users, and the greater community) is culpable.

A dependency, a package, is more than software; it is a relationship. Like all relationships, it comes with a set of expectations between the parties involved: party A in a relationship expects certain things of party B, and in turn party B expects certain things of party A. When either side fails to live up to those expectations that is when the relationship degrades and fails and people get frustrated, hurt, and betrayed.

Consider what one expects when downloading a software package:

  • Function: The user of a package expects that when a package is given X, it will return Y. This is the expectation by the user that a package will work and that it will meet their needs. Occasionally a package fails to work or meet the needs of the user, at which point they uninstall it and the relationship is over.
  • Constraint: The user of a package expects that a package is constrained to doing only what it is intended and described to do and nothing more. That is, the user expects a package to not do other things whether malicious or not.
  • Support: The user of a package expects the author of a package to provide some level of support. This includes well written documentation, responsive bug fixes, avenues of communication, and the addition of new features.
  • Notification: The user of a package expects to be informed when the package changes, how it changes, and why those changes were introduced. Notification allows the author to communicate with the users directly.

But what a lot of people overlook is the other side of the relationship having its own expectations:

  • Feedback: The author of a package expects that the user of a package will report issues with the package. One can never predict all of the edge cases where code may have problems. The author is dependent on the user providing the details about where those edges are and where they fail.
  • Reciprocity: The author of a package expects that the user of a package will help maintain and grow the package. This is a critical part of the open source equation: that everyone shares in the maintenance and growth of the package for the betterment of all. Without this participation most packages languish and die.
  • License: The author of a package expects that the user of the package will use that package in accordance with the licensing terms. The license is an expression of the legal wishes of the author, but the author has little ability to enforce such things. Instead, the author relies upon the relationship with the user to encourage these legal wishes.

When any one of these expectations, from either side, is broken, the relationship is strained. Sometimes that strain can be minor and survivable, but often it can be completely destructive. In the case of the event-stream story from above, the users failed to fulfill the Reciprocity expectation, which in turn caused a snowball effect causing other expectations to fail, which eventually led to critical failure of the entire package.

So how does one address these problems? How do the authors and users of a package prevent failures from occurring in the future? How does one ensure that the participants in our communities get what they need to not just survive but thrive as part of the community?

It begins by recognizing that there is still much work to be done on the systems in use and how the community uses those systems. Now, many of the tools at the disposal of package authors are geared toward providing and maintaining some of the expectations outlined above. GitHub, for example, can be said to be helpful in some of these expectations, but lacking (whether intentionally or not) in many of them. Certainly it enables the Support, Feedback, and Reciprocity expectations, but there is still much room for improvement.

While stronger more supportive systems and tools will help address some of the community problems, one must also be willing to examine and address the individual and community aspects as well. In many ways, the current situation in which the community finds itself is a function of getting better tools without considering the social engineering those tools also require. So any solution must also examine the people involved.

The authors need to understand that releasing software is not a “fire and forget” situation and that the users have expectations of the package and its author. There must be a realization that by sharing code with the world, one is creating a community that requires nurturing to grow and thrive. When we publish our code on GitHub, it is no accident that we immediately get an Issues tab and a Wiki tab. These are the author’s tools to build a community. And as the author, one must be ready to accept being the leader of this community. That means meeting the expectations of that community to the best of your ability: maintaining the software, ensuring it is safe to use, supporting it as needed, and updating your users about changes.

However, nobody is suggesting that sharing code automatically dictates a lifetime of servitude. Authors must have the tools and community support necessary to lead within the confines of their time commitments. Part of nurturing and enabling a community is knowing how to ask for and accept help when it gets overwhelming and how to apprise your community about the situation.

Yet the author is not alone in this package relationship: the users bear an even greater obligation to the relationship. Every time a user downloads a new package and does not find some way to participate that user is helping to contribute to that package’s failure. It sounds harsh, but one cannot continue to treat open source software as a free ride and expect to suffer zero consequences.

The manner in which a user contributes back to an open source project comes down to two possible actions: time or money. A user can donate time by helping to improve and grow the project: opening detailed issues reports, submitting pull requests which fix bugs or add features, improving documentation, offering support to newer users, helping to lead the project, all of these activities take time from the user and give it to the author to help in the project. Time is the most critical thing an author needs to maintain a project. However, if time cannot be found or is prohibited or prevented by one’s employer, consider donating money or asking the company to step up and donate money. This can be tricky for some projects as questions about who gets the money, taxes, etc come into play. However, if there is an avenue to donate, please use it. If not consider asking the project to add one. Also, for some projects it is possible to buy advanced licenses or support agreements and this can be considered as method of getting money into the project.

Ultimately a package is only as strong as both the author and its users make it. It is incumbent upon both sides of the author/user relationship to commit to their expectations. Anything less is setting oneself up for failure. The event-stream incident is a lesson from which current and future packages can learn. The users share as much, maybe even more, culpability as the author does. An unwillingness to see that is an indicator of a willingness to fail again. And the big failure of event-stream is not that malicious code got injected, it is that no one will learn from it.

Supporting materials and further reading:

https://medium.com/@cnorthwood/todays-javascript-trash-fire-and-pile-on-f3efcf8ac8c7

https://www.zdnet.com/article/hacker-backdoors-popular-javascript-library-to-steal-bitcoin-funds/

https://gist.github.com/dominictarr/9fd9c1024c94592bc7268d36b8d83b3a

https://changelog.com/podcast/326

permalink: http://www.arei.net/archives/293
My Thoughts on Speaking at JSConf Last Call

Speaking at JSConf Last Call is an amazing experience, and not for the reasons you might think. While the experience is still fresh in my mind I wanted to set down my thoughts and observations. If you are finding this too long, skip to the end for my “lessons learned.”

In case you were not at JSConf Last Call, here are the pertinent details… I, Glen Goodwin, and Todd Gandee gave a talk together entitled “We are Hacks and have been Stealing code for Years.” The talk was about how we all steal code because it is part of the process we use to learn new things, and how it is our responsibility to make sure others in our community learn this process. We were very well received, mostly due to Todd and me styling the talk as a quick back-and-forth dialog, hitting the major points we wanted to hit along the way. The audience enjoyed our presentation style, laughing a lot, but listening when we got serious. It was an awesome audience. The video is coming soon. This was my first ever talk at a technical conference and Todd’s second (but largest) technical talk. We are both super grateful to JSConf for giving two unknowns a huge chance on a relatively unknown subject.

So that’s the back story. Now here is where I tell you about everything else. The rest of this article details how we came up with the idea, our process in creating the talk, what it was like to give the talk, and most importantly what we learned from the process.

HOW IT ALL CAME TO BE

So the story of how our talk came to be begins with a tweet. Todd Gandee, Chris Aquino, Eric Fernberg, and I had all met at JSConf 2013. And every year since then we’ve all tried to make it back to the subsequent JSConf events. Each year when JSConf announcements are made, a tweet goes out among us asking who is going. For JSConf Last Call this was no different, and I asked everyone who was going to go.

Todd replied that he was thinking about putting a talk together.

To this I replied offering to co-speak with him or help out.

It was a pretty big move for me to offer to co-speak. I had always thought about talking at JSConf but never really felt like I had anything worthwhile to say. (See Kevin Old’s talk about Imposter Syndrome) So my desire to speak was there, but I lacked what I felt was a really big idea. But more than that, the idea of speaking scared me a whole lot; not because I am afraid to speak publicly, I have little fear of public speaking, but more because I was afraid to put myself out there. I remember agonizing over even sending my offer to co-speak or help Todd, but eventually I just decided to risk it.

Todd’s answer came back four hours later and we were on the phone talking within a day. He was in.

The good news was that Todd had a good idea for a topic. The great news was that when he told me it, I got excited. I knew I was excited because my brain kept coming up with new ideas, new angles, new thoughts. When I can’t shut my brain up, that’s a really good sign, and the longer my brain keeps spinning on a subject, the better the idea.

So we opened a Google Doc and started spit-balling ideas.

SUBMITTING THE TALK

There were going to be some barriers to working together on this talk; primarily distance. Todd lives in Atlanta and I live in Baltimore, about 700 miles apart. Yet we knew the internet was full of tools for collaborating and we could employ them to our advantage. Initially we started with a Google Doc into which we put our ideas. Then came Google Slides for actually building our slide deck. Eventually we turned to Google Hangouts and ScreenHero for rehearsing together, but that’s getting a bit ahead in the story.

I want to tell you that Todd and I got together every couple of days and constantly refined our idea, worked out the story we wanted to tell, built the slides, etc. I want to tell you that but it would be a complete lie. Life is hard and gets in the way a lot. We are no different, so there was a fair number of stops and starts and really long breaks.

Initially we started by coming up with the JSConf submission form answers we were going to need. After all, there was no real point in working on a talk if we didn’t get accepted. So we crafted a Mission Statement. Well, really more of a presentation abstract, but I really wanted to fit that Jerry Maguire link in there. Then we massaged the abstract into the JSConf submission form.

We had a lot of questions though… Would JSConf allow a pair speaker presentation? Could they even technically support two speakers? Was what we were proposing a good topic? We emailed our questions to various JSConf people we knew from past years. I reached out to Chris Williams, having met him a number of times in mutual local community events. Todd reached out to Derek Lindahl whom he was friendly with from prior JSConf events. We wanted to know if we even had a shot and we wanted to be clear that we were willing to work with JSConf. We didn’t want the fact that we were going to have two speakers put any additional financial burden on JSConf. We were happy to pay our own way, so JSConf didn’t need to comp us extra because of a second person.

Our contact with the JSConf staff met with mixed results. I knew Chris was really busy with real life things and was not surprised when he never got back to me. Todd had a little better success with Derek, but it fundamentally came down to Derek saying, “Just submit it. If it’s good it will get selected.” That quote from Derek, by the way, is the answer to always tell yourself if you are thinking about submitting. Just stop second guessing yourself and go submit it already.

So, on the last day of the submission deadline, after going back and forth a few times on our answers, we submitted our talk idea.

Here’s our initial submission abstract:

Two intrepid developers, who met at JSConf, examine the relationship of sharing code, community, and developer growth throughout the short history of making programming more art than engineering.

In this talk, Glen Goodwin and Todd Gandee will walk back from the present day to the “ancient” past of Babbage and Lovelace discussing how the act of “creative borrowing” influences learning and understanding for computer programmers; how we learn by observing and deconstructing the work of others to make it our own. This includes an examination of past and current models used for “stealing” the (mostly) freely shared knowledge and past work of others like Github, StackOverflow, View Source, and Byte Magazine. Our talk emphasizes the importance of inclusive conferences like JSConf in the growth of junior and senior software engineers. Programmers’ tools of today illustrate the apprentice/mentor relationship more akin to the arts than engineering.

On October 20, 2015, we were officially notified of our acceptance. It was a glorious moment, getting accepted. Rachel White in her own JSConf Last Call talk said she ran around the building screaming upon being accepted… There may or may not have been some happy dance moves on my part; I admit nothing.

THE FIRST DRAFT

And then it sunk in… Now we have to write the damned thing.

Initially we started just coming up with ideas. We had the basic theme of our talk in the form of our title, “We’re Hacks and We’ve been Stealing Code for Years.” Great title, but what did it really mean? What does “Stealing Code for Years” really imply? Also, we knew that this being JSConf’s swan song meant something special to us, and we really wanted to show that. In the shared Google Doc we just started throwing out ideas, snippets really, that we thought might be relevant, possibilities.

And then neither of us touched it for weeks. Like I said, life is hard and things get in the way.

Yet, the thing about an exciting idea for me, is that my brain never really lets it go away. So while neither Todd nor I talked about it or added anything new to the Google Doc, my brain was constantly spinning things around, you know, like in a background thread.

Then one day, while sitting in my favorite lunch spot, having my favorite lunch (beer), I had a moment, a vision, an inspiration. “We should totally open the talk wearing ski masks, like we’re trying to protect our identity.” So I pulled out my iPad and proceeded to write this down. I knew that in order to pull off a ski mask based gag, the dialog would need to be very quick, so I decided to approach it like a theatrical scene using a script. And once I started writing it, once I started working in the script format, the words just poured out of me. It helps that I was an English Literature major in college and writing comes very, very easy to me.

So, yes, the first ten minutes of the first draft of the script was written in a bar, on an iPad, while consuming beer.

Explains a lot.

THE SECOND DRAFT

After some initial conversation with Todd about this new script, we again stopped working on the project for a few more weeks. While the first ten minutes of the script was done, the rest wasn’t really coming to me. And what little I did add after the initial burst was a little disjointed. We had all these ideas, but we lacked an organization for the ideas.

And then inspiration struck Todd.

Todd is a very visual thinker where I am a very textual thinker. He thinks by drawing stuff out, where I’m more of a writing-stuff-down kind of person. They are two different approaches, complementary at times, but incongruent at others. So Todd was having trouble thinking about the talk because we hadn’t organized it visually; I was having trouble thinking about the talk because I couldn’t figure out where to go next. And we were both stuck.

And like I just said above, inspiration struck Todd. He called me up. “I made a Trello board to just write down all the slides we need. I need to organize how this is going to go.” So we fired up Trello and we started to work. We first outlined the major points we wanted to hit, like talking about the history of Code Stealing or the section on Community. Those became our Trello columns (Trello lists). Then in each column we put the specific points we wanted to make, like talking about NodeSchool or Women Who Code. Then we could move the Trello columns around to come up with the best way to present the story we wanted to tell, the progression from Stealing Code to Community.

I cannot stress enough how much what we did in Trello saved our talk. It was so instrumental to just organizing what we wanted to do. And once we had that, the script I had to write became super easy to do. I literally copied all the column names out of Trello into the script as section headers. Then I copied out the specific points of each section into the script. A little bit of refactoring on the introduction part, and the script pretty much seemed to write itself.

The Script was completed three days after we did our work in Trello. Well, the second draft was completed. The final draft of the script wouldn’t be done until about two days before our talk was to be given. It probably would have been tweaked right up to minutes before the talk, but we had to pull the slides out of Google Slides and into Keynote to protect against conference internet latency. That meant Todd had the latest copy so I was prevented from rewriting anymore.

TODD MAKES SLIDES

The plan was for me to write the script and Todd to work on the slides. But in order to make the slides, Todd needed a sense of where the slides would go, how they would fit into the script. So we decided that as I wrote the script I would put slide changes in as scene notes, like this:

GLEN: That’s my point… We have an entire industry of tinkers, it’s baked into what we do. [SLIDE: Tinkering, or breadboard in state of repair]

Also, since the script had a certain flow I had an idea of what slides I thought we should use. Plus I knew there were just some slides I had to have in the talk because I love the images, like this IT Crowd one… Mandatory for any talk in my opinion.

So once I really got started on the script, Todd got started building out the slide deck. He started collecting images and putting them in order. I am firmly in the camp that you should never make your audience read your slides, so we agreed to minimize that: use the images to enhance what we are speaking about, so the content of the talk is the focus, not the images. Turns out when you throw in a bunch of humorous slides and make people laugh, that also kind of pulls their attention away from the content, but we’ll talk about that later.

Of course, while all this slide work was being done the script was constantly being tweaked, so the slides were getting tweaked as well. I was making sure to re-read the script 3 or 4 times a day, fixing typos and refining the flow. Todd was continually adding more slides and I was continually offering more suggestions.

The entire process was very iterative. I’m not sure how much Todd started to hate me at this point because I kept changing the ground out from under him. I tried to minimize the impact of my changes, but it happens. There were also a lot of places where I would change his slides and he would change my script. We had to completely throw out the idea that I owned the script and he owned the slides; neither of us owned our respective parts anymore; everything was shared.

Meanwhile, we both set out to memorize the script. Big mistake.

REHEARSING OVER THE INTERNET

See, the script I wrote was roughly twenty (20) pages of mostly rapid back-and-forth dialog. Neither Todd nor I had ever done any acting at all; we had zero experience memorizing lines. And let me tell you, memorizing lines is incredibly hard. I am a huge live theatre fan, especially Shakespeare, and have always had a lot of respect for actors. In the weeks leading up to the conference my respect doubled, tripled, then doubled again. My hat goes off to the actors who can memorize their roles in iambic pentameter.

So we came to the realization that we were not going to be able to memorize our parts. Instead, we were going to have to read from the script as we walked through the slides. So we had to cut and paste each section of text from the script into the slides. Also, because of the rapid pace of our dialog, we would need at least a few lines from the next slide showing on the current slide. This, it turned out, was a fair amount of work; and as we kept refining the slides, we also had to refine the notes for each slide, and the text from the prior slide, and the text from the next slide. It was a constant battle of keeping everything aligned.

About five days before the conference we had our first “live” reading together using ScreenHero and Google Hangouts. We set some time aside on Tuesday night at 9:30 after our respective wives/partners had gone off to bed. I remember telling my partner Jennifer that I would be up “in about an hour.” I went to bed at 1:30am that night. We managed to read the script completely through exactly twice.

This is where I feel the real work began. We moved some slides around, changed some images, changed a bunch of dialog placement, and all that. Every single slide and line of dialog was tweaked and reviewed. It was constantly a work in progress and as I said above, changing slides around meant a lot of rework to get all the alignments correct. Uggh.

I think prior to our first reading that I figured we’d rehearse a couple of times, do the Google Hangout things, then maybe a few reads the day before our talk. Except it became pretty clear right from the first reading that the timing of our dialog was going to be everything. The opening bit with the masks was especially hard for me to get down because of the constant interruption nature of those first 20 lines.

After working into the wee hours on Tuesday, we decided we needed to do it again the next night.

Wednesday night at 9:30 I told my partner Jennifer once again, “This should be much shorter, we just need to read it.” I went to bed Wednesday night at 2:30am.

THE DAY BEFORE THE DAY BEFORE

On Thursday I flew down to Jacksonville. It’s a two hour flight from Baltimore, plus a few hours sitting around in the airport. I re-read the slides a dozen times. I had one particularly long speech that I just couldn’t seem to get down so I keep going over it over and over again. I’m pretty sure the couple sitting next to me on the plane thought I was some sort of crazed lunatic because I just kept staring at my iPad and mumbling to myself under my breath; and then every time the “Points for Glen” slide would come up I would throw my arms up in the air. I’m pretty sure there wasn’t a sky marshal on the flight or I would have been detained.

The preceding Monday Jan Lehnardt had contacted me about sharing a shuttle van from the airport, since he, I, and Patricia Garcia were all arriving at the airport at the same time. I had already booked a rental car, so I just invited them to join me in the drive. Best thing I ever did. Jan is a seasoned pro at talks and Patricia had just given her first one at JSConf EU. I quizzed them both mercilessly for tips and tricks and perspectives. They were both very open and sharing, despite having been awake far more hours than they should have and the fact that it was like 2am in their local time. I appreciate their company and helpfulness.

Additionally, Jan pointed me at a blog post he did on the subject of speaking at conferences which he emailed me later. It’s a great quick read and you should totally read it. It also gave me the idea to write this up as well and share my own experiences. So much thanks to Jan and Patricia.

THE DAY BEFORE

Todd (and another friend of ours Chris) arrived the next morning. Let the dress rehearsal begin.

Remember when I said I had constantly been tinkering… It’s true. The first thing I did before we even rehearsed was drop a slide and 2 lines of dialog. We also realized (remembered, actually) how shitty the hotel WiFi is. This is important because we had used Google Slides to write our presentation. The first time we ran through the Google Slides on site we realized that waiting for each slide image to load wasn’t going to cut it. We ended up exporting the slides to Keynote. For the most part this was fine, as almost everything moved over except for one crucial part: we had done some color coding of our speaker notes to indicate text said on the prior slide, text coming up on the next slide, and even whose line was whose. That didn’t transfer over. Todd was a trooper and reformatted all of it Friday night instead of sleeping.

We rehearsed probably eight times that day. Chris (our other friend) came in and pretended to be our audience despite the fact that he had already heard our talk five times at that point. Having him there was critical because it acclimated me to the idea of an audience and let me focus on them instead of always looking down at the slide notes.

A large part of our practice on Friday was just getting comfortable with each other and learning the timing and the delivery of each line. We had never given a talk together, so just learning each other’s cues was really important. Creating a rapport between us was really one of the biggest successes of our talk, and that rapport was entirely due to spending a day practicing. By learning each other’s cues we also learned how to respond to each other conversationally. This turned out to be critical because I don’t think we ever said the same line the same way twice. There was a fair amount of “scripted improv”. That is to say, as we delivered each line we would modify the wording or the intonation or the pace on the fly. It made for a much more comfortable dialog.

Practice also proved that our method of reading the slide notes would work amazingly well without the need for total memorization. Yeah, I did end up memorizing a lot of my lines, but what I had a big problem with was knowing when each line was supposed to come. However, because the dialog was between the two of us, whenever Todd was speaking I could glance at the script and read my next line.

Another thing that changed during practice was who ran the slides. Initially I ran the slides during our remote practices, but it seemed to get in the way when we were rehearsing together. We tried splitting who was in charge of which slide, but it didn’t work. So Todd just took it over and did a far better job than I had. I think he’s more used to using the clicker and was less worried about timing the slides with the dialog. I also think he really wanted to take over running them from the very beginning, but I was unable to hear his desire. Sorry Todd. You did an awesome job running the slides.

GIVING THE SPEECH

Honestly, I have very little recollection of what happened during the speech.

I couldn’t tell you how Tracy introduced us, but I’m sure she did a great job. Everyone knows she’s awesome.

I know a lot of people laughed, which was so amazing. I knew we had a few jokes in there, but never expected our audience to laugh as much as they did. And the laughter started the minute we stepped on stage. I’m told we looked utterly ridiculous. I remember telling myself prior to the talk that “if we got some laughs, just wait a second or two for them to die down before continuing.” Problem was, in the intro, the laughter didn’t stop. People genuinely thought what we were doing was funny. That made the speech for me right there.

After that point, everything was coasting.

I know I made two mistakes, but they weren’t huge and I was okay with that. I remember toward the end of the talk, Todd read one of the lines I was supposed to read. So I read his. And we kept going, each reading the other person’s lines. Nobody seemed to notice, so we went with it. I think the practice really allowed us to do that. Well, that and it was pretty close to the end.

I do remember Zahra’s game of twenty questions, but having missed all the prior talks due to rehearsals, I had no idea why she was asking us these questions. It seemed odd, but I rolled with it. Turned out to be a lot of fun… I improvise fairly well. I think Todd doesn’t do it as well and was less comfortable.

AFTERSHOCKS

Let me tell you what it’s like after giving a speech…

First, everything about your talk, all that memorization you did? All of it, gone. My brain literally emptied of everything regarding the talk within a second of exiting the stage. Well, that’s not entirely true… I know all the themes and points we hit during the talk, but the lines we memorized? Can’t think of a single one. A few have surfaced back into memory over the days since, but by and large all the memorized text is gone.

Second, I was full of energy. My body was positively charged and I just wanted to bounce around. We had given the talk, everyone laughed, it was great. I was humming.

People came up to us and said good things. Chris Williams came over and complimented the hell out of the talk. It meant so much to us that he enjoyed it. Todd and I had said that if just one person came up to us afterward and complimented the talk, then it was a success. That the first person to do so was Chris made it all that much better.

We shook hands and said “Thank You” a lot. We met new people, we grew our community, it was amazing.

And then the crash… I think your body runs so high building up to your talk that when it’s all over the adrenaline goes away and all you want to do is sleep. I almost fell asleep in one of the talks after ours. Todd was in the exact same state. Power nap time. It’s amazing how much your body can recover in a 30 minute nap. Try it some time. 30 minutes is the perfect nap length.

WHAT DID I LEARN?

Don’t agonize over submitting your talk, just do it. You are completely capable of giving a talk, and people genuinely want to hear what you have to say. Stop thinking about it and go submit a talk. Do it now, I’ll wait.

Don’t give a talk with another person unless it’s absolutely critical to do so. It’s really, really hard to do. We were very lucky in that Todd and I work amazingly well together. YMMV.

Outline before you write, or make slides, or whatever. The outline is the key to organizing your thoughts.

You don’t need a script like we had, but have solid notes that are easy to refer back to.

Don’t make your audience read your slides.  Use images to enhance what you are saying.  But be careful with funny images, as they can also distract people from what you are saying.

If your slides depend on timing, make sure your notes have the text that is immediately following your slide change so you don’t lose your place.

You can never practice too much. I think overall, Todd and I rehearsed together somewhere between fifteen and twenty times. When we were not rehearsing I was reading the slides over and over again.

Delivery is everything. The timing, the intonation, the mannerisms, they all play into the performance.

It’s a performance. I learned this from Jan’s blog post (see above) and it’s absolutely true. Even more so for us, since we actually had a script.

If you do slides in Google Slides, export them to PowerPoint or Keynote for the actual presentation. Never ever rely on conference WiFi.

You are absolutely going to make mistakes during the talk; just roll with it. Take a second, move on. Repeat your line if you have to. Like I mentioned above, Todd said one of my lines near the end of our talk, so I said his next line, and for the last 10 or so lines of the talk we were inverted. We never rehearsed that, so we were completely unprepared for it, but we were pros at reading the script by then, and nobody noticed.

As Chris Williams told all the speakers in a conference call prior to the conference, everyone in the audience wants you to succeed. They’re your peers, co-workers, friends, and family.

Thanks.

permalink: http://www.arei.net/archives/289
Announcing npmbox

So, I have a problem with npm in my current work. Basically it’s this: the systems I work on are disconnected from the internet, but I still need certain node modules. So in order to get a module over to these systems I have to install it locally, zip it up, and move that file. But technically, that file is an INSTALLED version, not the raw files npm downloads during the install. So what I need is a way to capture the raw files of an npm install, and then a way to install from those files.

I recently filed a feature request for npm describing this very thing. Yet I don’t think everyone understood the request; plus, I thought it wouldn’t be that hard to actually build. So I did.

Announcing npmbox, a command line add-on for npm. Given an npm package,

npmbox <package>

it downloads the package and its dependencies and creates a .npmbox file, which is a .tar.gz archive of all the raw files necessary to install the package: the package itself, its dependencies, and its optional dependencies.

Taking this .npmbox file, you can then move it to your offline system and issue an npmunbox command against it.

npmunbox <npmbox-file>

This will take the box file, unzip it, and install its contents into the current folder.

This achieves exactly the process I was looking for: bundling a package for offline use and then installing from that bundle.
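
For example, the whole round trip might look like this (assuming npmbox names the archive after the package; check the README for the exact output name). On the connected machine, run

npmbox express

then move express.npmbox to the offline system by whatever means you have, and there run

npmunbox express.npmbox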

My sincere hope is that the npm people take a look at the functionality and actually add it directly to npm. I’m not expecting them to use the code, just the basic idea. I’m sure there are far better ways to achieve this from inside npm than what I am doing. So have at it.

permalink: http://www.arei.net/archives/274
Hiring Cleared JavaScript Developer, Ellicott City, MD

Generally I avoid recruiting people on behalf of my company, but they have recently asked me to hire a person to work directly for me and hopefully even replace me in the future.  And, quite frankly, I’m tired of interviewing people brought to me by the company recruiter who have no understanding of what is really involved in writing code.  So, I wrote this way more interesting-sounding job description and now I’m looking for someone to fill it.  Are you that person? Know someone who is? Let me know with a quick email sent to vstijob/at/arei/dot/net.

Are you an up and coming hot shot Web Developer who somehow ended up working on government contracts?  Are you horrified by the state of technology that the government contract you are working on has saddled you with?  Do you wish that your project would stop supporting browsers that are 11 years old and move into the future? Do you want to move beyond writing JSPs, Web Services, and XML? Are you ready to give in to the dark gibbering madness that comes with embracing technologies like JavaScript, CSS3, JSON, and NodeJS? Are you ready to awesome-ify your career?

VSTI, A SAS Company, is looking to hire a Junior to Mid-Level Web Developer/Engineer and we want it to be you. Well, we want it to be you if…

  • You have a good basic understanding of JavaScript and are ready to learn a lot more.  And by ‘basic understanding’ we mean that you know what a closure is and how to write one in JavaScript, but you really want to understand why this makes JavaScript so powerful and all the really cool things you can do now that you know what a closure is.  We’re not looking for a JavaScript expert, but someone who wants to become a JavaScript expert.
  • You know more about HTML/CSS than just how to write tags and believe that 90% of HTML should be written using only DIV tags and some amazing CSS. In fact, you should be able to tell the difference between block and inline elements at a glance.
  • You find the possibility of using cutting edge technologies like NodeJS and ElasticSearch to be darkly enticing. In fact, just the act of us mentioning that you could work with those technologies makes you giggle like a mad scientist.  After the giggling has died down you decide to go read the documentation just for fun.
  • You have a serious passion for technology and want to learn more, a lot more.  Ideally, you wish you could learn everything through some sort of cybernetic implant process, but you realize that you still haven’t invented that yet and the only way to learn is to read and to get first-hand experience.

Here’s the obligatory job posting details…

  • You must be willing to work in Ellicott City, MD on a daily basis.  No telework is possible.
  • You must have an up to date TS/SCI Full Scope Polygraph clearance.
  • You must have a solid understanding of JavaScript, CSS, and HTML.
  • You must be willing to be figuratively thrown into the fire.
  • You must be passionate about learning new stuff.
  • You must desire to become an expert in Web Technologies.
  • You might also have an understanding of any of the following… Java, Groovy, Ant, Grails, more than one JavaScript framework (like jQuery or Prototype), Agile/Scrum/Kanban, CSS Resets, SVN, Hudson, CSS3, Rally, Apache Tomcat, or Git.
  • If you have a Github account, a blog, or an active twitter account that will help your cause greatly.
  • You might also be competent with some art software like Adobe Fireworks or GIMP, but it’s not a requirement.

In return for all of this VSTI, A SAS Company, will provide you with the following…

  • A very competitive salary.
  • Amazing benefits that you’re not likely to get elsewhere.
  • Small company feel, Large company resources.
  • An amazing chance to learn and grow your skills.
  • Patriotic pride in what you do (since this is a government contract after all).
  • The opportunity to be part of a team where your opinion is valued and taken into consideration.
  • The possibility of free beer if you like that sort of thing.

So, if any of this sounds cool to you then you should apply for a job.  We’re interviewing now and we’re looking to hire very quickly.

permalink: http://www.arei.net/archives/260
node-untappd v0.2.0

Just a quick note to say that last night I released to Github and NPM v0.2.0 of node-untappd, the Node.js connector to the Untappd API.  This new version supports the latest v4 of the Untappd API, including OAuth authentication.

Lots more details at Github: https://github.com/arei/node-untappd

permalink: http://www.arei.net/archives/258
Introducing functionary.js

I am pleased to release another open source project, functionary.js.

functionary.js is a simple JS polyfill and enhancement library that adds additional behavior to functions.  Fundamentally, it allows developers to modify and combine functions into new functions to develop novel patterns.  It is very similar to sugar.js, prototype.js, and the jQuery Deferred object and behavior.  It is sort of like Promises/A but not really like it at all.

Functionary is a prototypical modification of the Function object.  This may cause some users considerable consternation, as there is some belief that JavaScript should not be used in such a manner.  I, however, believe that JavaScript was actually designed to do this very thing and should do it, albeit with great care.  That said, functionary.js will never overwrite a function of the same name on the Function prototype.  So if Function.prototype.defer has already been created by another library (like Prototype JS), functionary.js will not overwrite it, but will use it instead.
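
A minimal sketch of the idea behind that guard (an illustration only, not the library’s actual source):

if (!Function.prototype.defer) {
    Function.prototype.defer = function() {
        // Schedule the bound function to run on the next tick,
        // passing along any arguments it was given.
        var fn = this;
        var args = Array.prototype.slice.call(arguments);
        setTimeout(function() { fn.apply(null, args); }, 0);
    };
}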

functionary.js works in both browsers and node.js implementations.  However, functionary.js does modify the global Function prototype and should be used with care and full understanding.

It breaks down into two distinct classes of enhancement: threading enhancements and combination enhancements.

For purposes of the following discussion, the following things are true:

var f = function() {};
var g = function() {};
var h = f.something(g);

In the above example, f is the bound function of something and g is the passed-in function.  Understanding this will make the discussion below clearer.

Threading Enhancements

While JavaScript is inherently single-threaded, it is possible to create thread-like behavior using the single thread model and eventing.  This is not a replacement for true multi-threading, but it is an okay approximation and the fundamental underpinning of JavaScript and node.js.  functionary.js provides a set of function modifiers for working within the confines of this “threading” environment to produce cleaner and expanded function behaviors with regard to threading and events.

The following functions fall into this group: defer, delay, collapse, and wait. defer and delay will be familiar to many JS developers, but collapse and wait are the really interesting additions.

collapse will collapse multiple executions of a function (call it f) into a single execution. So regardless of the number of times you call f(), the actual execution will only happen once… the executions are collapsed into a single execution.  collapse can be used in one of two forms: Managed or Timed.  In Managed mode, the execution of the function is not performed until f.now() is called manually.  Thus repeated calls to f() are short circuited until f.now() is called.  In Timed mode, the execution of the function is automatically performed x milliseconds after the first call to f(), and then everything is reset.  This ensures that execution happens based on some period of time.
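
As a rough usage sketch (the exact signatures are in the github documentation; in particular, the millisecond argument for Timed mode is my assumption):

var save = function() { console.log("saved"); };

// Managed mode: repeated calls are short circuited until .now() is invoked.
var managed = save.collapse();
managed(); managed(); managed();
managed.now(); // "saved" prints exactly once

// Timed mode (assuming a millisecond argument): the first call starts a
// timer, and every call inside that window collapses into one execution.
var timed = save.collapse(100);
timed(); timed(); timed(); // "saved" prints once, ~100ms after the first call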

wait is used to defer execution of the bound function until one or more functions, passed as parameters to wait, have been executed.  Once all of those functions have executed, regardless of their results, the bound function is executed.  For example, f.wait(g,h,i) allows a developer to defer execution of f() until g, h, and i have all executed (see the sketch below).
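
To make the idea concrete, here is a hand-rolled illustration of a wait-like combinator in plain JavaScript. This is not functionary.js’s implementation, just the concept it describes:

// Illustration only: run target once every listed function has executed,
// regardless of whether any of them threw.
function waitFor(target) {
    var fns = Array.prototype.slice.call(arguments, 1);
    var remaining = fns.length;
    return fns.map(function(fn) {
        return function() {
            try {
                return fn.apply(this, arguments);
            } finally {
                if (--remaining === 0) target();
            }
        };
    });
}

var g = function() {};
var h = function() {};
var i = function() {};
var f = function() { console.log("g, h, and i have all run"); };

var wrapped = waitFor(f, g, h, i);
wrapped.forEach(function(fn) { fn(); }); // f fires after the third call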

Combination Enhancements

The other aspect of functionary.js is providing some simple ways to combine functions with other functions in a clean and concise manner.  Sure, it is possible in unmodified JavaScript to combine functions, but functionary gives you a clearer way to do it.

At the core of this group of functions is bundle, which takes the bound function and combines it with the passed-in function(s) to produce one single function.  Think of it as taking function f and function g and bundling them together into function h.  Calls to function h then execute f and g, in that order.  While in some cases it’s easy enough to just call f and g directly, in others being able to pass around a single function like h can be incredibly useful.  Consider this:

var f = function() {};
var g = function() {};
var h = f.bundle(g);
h.defer();

The last line of this code ensures that f and g run together, but only after being deferred.

Other functions include chain, before, and after.  These are all, in essence, some form of bundle, although with different orders of execution.

functionary.js also provides two new functional wrappers, completed and failed.  completed ensures that the passed-in function will only execute if the bound function returned without throwing an exception.  failed is the converse: it ensures that the passed-in function will only execute if the bound function DID throw an exception.  These two functions offer a unique approach to try/catch paradigms.  Coupled with bundle, a try/catch/finally model is easily possible, as shown:

var compl = function() {};
var fail = function() {};
var fin = function() {};
var f = function() {};
var h = f.completed(compl).failed(fail).bundle(fin);
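
If I am reading the semantics right (check the github documentation to be sure), calling h() then behaves roughly like a try/catch/finally block: compl runs if f() completes cleanly, fail runs if it throws, and fin runs either way.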

Finally, functionary.js offers a wrap function, akin to sugar.js and prototype.js, where one can wrap the passed function around the bound function.
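
If wrap follows the prototype.js convention, which is an assumption on my part, the wrapper receives the original function as its first argument:

var original = function(name) { return 'Hello ' + name; };
var loud = original.wrap(function(orig, name) {
    // Call the original, then post-process its result.
    return orig(name).toUpperCase() + '!';
});
loud('world'); // "HELLO WORLD!" under the assumed semantics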

Documentation

Please see the documentation on github for all the details of using these functions.  Also, play with the code; it probably won’t bite.

Installation and Details

You can get functionary from github.  I encourage you to check it out, fork it, and submit changes.  The ultimate goal is to make functionary the go-to polyfill/add-on for functions in JS.

Thoughts, Comments, Suggestions?

If you have thoughts, comments, or suggestions, please put them on github or contact me on twitter at @areinet.

permalink: http://www.arei.net/archives/254
Adding Last Beer and Last Tweet to a Website

So, I updated the site today adding two new features that I have wanted to add for a long time: Last Beer and Last Tweet.

Last Beer is a peek into the last beer that I consumed and checked into on Untappd.  Don’t know about Untappd? It’s basically foursquare for beer, and if you aren’t using it, you are really missing out.  I’m a big fan and regularly use it to track my beer consumption and history.

Last Tweet is, as you might imagine, the last thing I tweeted, retweeted, or replied to on twitter.  It’s about as basic as it comes.  I am a huge twitter fan/user and if you are not, you should be.  Twitter is where it’s at and Facebook is incredibly lame.

The really cool thing about these two features is that they are both built and running on node.js.  What’s more, Last Beer uses the node module node-untappd, which I wrote back in March as my very first node module.

I will say that getting node.js up and running in my hosting environment was a big PITA, but only because my hosting environment is shared hosting, and running server processes is a big no-no there.  I ended up having to buy a VM and run it there.  That meant a fair bit of research and whatnot on which service was better and cheaper and all that, but I finally settled on one in April.  Once the server was provisioned, getting node.js running was trivial.  It’s really been a pleasure to work with once you figure out what you want.  Perhaps I will blog about the experience in more detail at a later time.

http://untappd.com/user/arei

permalink: http://www.arei.net/archives/247
Why Dependency Management Pisses Me Off

Yes, it’s true. Dependency Management Pisses Me Off. Jason van Zyl over at Sonatype needs to be kicked in the groin… repeatedly. (Sorry, I don’t really know Jason and it’s not nice to say such things, but I wanted to really hammer home the point. Jason, I apologize to your testicles.) Seriously, when did we become so amazingly lazy that saving a JAR file into our SVN repositories became a big deal?

Now, I don’t want you to think I’m just some raving lunatic out there on his soap box shouting into the wind, despite the accuracy of this picture; I want to at least pretend that you are not going to scream “TL;DR” and that you will actually read the damn posting. So here is why Dependency Management pisses me off. Feel free to reply back to me on Twitter (@areinet) and I will engage you in some spirited debate. And I promise I won’t hurt your testicles in the process.

1). Dependency Management adds unnecessary complexity. Are we not just talking about saving some files into our SVN repository, after all? Why is that so hard? And who on earth thinks that writing and changing a pom.xml file is actually easier than that? Also, there are people who would say that you shouldn’t be committing built artifacts into SVN (or whatever) and that we shouldn’t waste disk space. To these people I say this: disk space is cheap. Seriously, the going rate for a hard drive is around 9GB per $1 USD. I assure you, no matter how many JAR files your project needs, this is incredibly cheap.

2). Dependency Management puts someone else in your build loop. When I build a project, I want to rely on as few people as possible not to fuck things up. Yet Dependency Management injects a completely incalculable third party into your system, and that’s just for one dependency. Sure, that dependency is always there, but with external dependencies you are practically begging for your build to break because John Bozo three countries away from you removed six bytes of code from that one project you were relying on. Now, of course, you shouldn’t be using LATEST in your dependencies, but I really don’t want to rely on our build people being smart enough to realize this. If I just committed the version I wanted to the repository, none of these problems would happen.

3). Licenses change. When your build person goes out there and changes a version number, do you think they actually read the license file? Let’s assume, for the sake of reality, that people are incredibly lazy… now, do you want to take the risk that so-and-so actually read the license? And that the license didn’t change? Seriously, it would take me about six seconds to change the license on some sub-project you use and then commit it. And suddenly the sub-project owns all of your IP simply because you used it. Now, the legality of that is a debate for scholars other than I, but it could certainly cause a mess. The only thing stopping a sub-project from doing this is the hassle of suing your ass into oblivion… and we all know that people are getting more and more litigious every day.

4). The Internet is, by its nature, unreliable. Do you really want to rely on the internet providers upstream from you not screwing something up right when your absolutely-must-deliver build has to run? Seriously, the more people that have input into a process, the more likely that process is to get derailed. I do not want my ability to deliver to depend on whether or not Anonymous is going to cause a world-wide outage in protest over SOPA (which sucks, by the way). Sure, you can run a mirror of repository X and spend your time maintaining that as well, but wouldn’t you rather spend that time, oh I don’t know, outside? With a girl? Playing WoW? Doing anything else?

So the point here is this, and this is the TL;DR for you lazy people as well… Dependency Management adds both complexity and unpredictability to your systems, and that is not a good thing. A build process is about rigor, and Dependency Management is antithetical to rigor. By using a Dependency Management solution you are willingly signing up for problems and extra work. Who wants that? When given the choice between that and just storing the files I need in my repository, I will choose the latter every time.

Now, I do think some dependency systems are way better than others. The node.js NPM system is amazingly clean, but it’s still begging for the problems I outline above. So, maybe not that awesome. It is easy to use, though; I wish Maven were half that easy.

So, that’s it. That’s why every time my coworker comes in and raves about how awesome Maven is I just point at his crotch and start laughing. I mean, really, Maven? Awesome? You’ve got to be an idiot to think that. (My apologies to my coworkers.)

permalink: http://www.arei.net/archives/246
My Ideal Job

Lately, I’ve been asking myself if my current work role is really the best use of my talents.  But shortly into wondering about the answer, I formed an even more important question: what exactly do I consider the best use of my talents?  So, here, for better or worse, is where I think the best use of my talents lies…

First and foremost, I’m a hacker type through and through.  A work day in which I am not writing code is a terrible day for me.  A work day in which I write only a little code is a terrible day for me.  A work day where I am heads down, balls to the wall, buried in code and completely oblivious to the passage of time?  Ding!  Awesome day.  What’s even better about those kinds of days is that when I’m in the zone like that (interface designers call it “Flow”) I am disgustingly prolific. I mean oodles and oodles of code being churned out.  That’s a win not just for me, but for the company for which I am working.

Next, I am a very creative person.  This means I get strength and energy from creative outlets.  I am not going to be your go-to guy to write that piece of software for which you have painstakingly provided pages and pages of detailed specification.  No, I’m the kind of guy you come up to and say, “Hey, I had a friggin awesome idea, can you whip something together for me to test the idea out?”  I will take your crazy ass idea and run with it.  Now, the result here can be mockups or it can be straight to code, I’m comfortable either way, although I think I’m more productive in code, but whatever.  Just give me an idea that I can contribute to and put my own spin on and I will exceed your most wild expectations.

Also, I love writing reusable components and libraries.  Recently Jacob Thornton, an engineer at Twitter (@fat), shared a tweet at JSConf 2012 that I really found interesting.  He tweeted, “new interview question: you have 45 minutes to write JQuery from scratch. get as far as you can. start from wherever you’d like.”  I absolutely loved this question, not just because I think it would really separate the wheat from the chaff, as it were, but also because I would love that challenge.  Ultimately @fat concluded that anyone trying to answer the question would be screwed because there’s so much depth to jQuery, but I would absolutely love to try.  I could spend a lifetime re-answering this question over and over and over, getting it more and more perfect each time.  I have been fortunate in my career that the work I am asked to do more often than not has limitations that prevent us from using certain libraries.  I get to go in, study those libraries, and then recode them for my project.  It’s quite enjoyable and amazingly enriching.

Finally, I love sharing my knowledge with others and learning their ideas and knowledge as well.  Mentoring to me can be quite a lot of fun and there is nothing better, IMHO, than a willing and eager student/peer who wants to learn or wants to debate.  I love that sort of thing.  The caveat, though, is that these activities must not take away from the two activities above.  I like mentoring some of the time, but when it becomes a full time job, when I start managing, that’s where I lose interest.  Surprisingly, I am extremely good at managing and have held a number of management positions in the past, but ultimately, I have no interest in doing it in the future.

So, all that said, my ideal job is Hacker and Evangelist of Prototype Libraries. Now, I know that’s probably not a real job (if you think differently, please send me an email!), but that’s what I would ideally like to be doing. And it’s pretty cool that I’ve figured it out.  I now have a benchmark against which I can hold up any two jobs and ask, “How closely does each of these jobs approach the ideal I have set for myself?”  That’s the job I’m more likely to take; that’s the job I want.

So, current job, how much do you think you live up to my ideal?

permalink: http://www.arei.net/archives/245
node-untappd: A NodeJS Library for using the Untappd API

Last night, in anticipation of the upcoming JSConf, I released version 0.1.0 of node-untappd.  node-untappd is a NodeJS API client for using Untappd services. My hope is that someone out there might use this to make some really cool things that they will then share with me and offer to buy me beer for my hard work.  We’ll see how that works out.

If you are unfamiliar with Untappd and drink beer at all, you need to make yourself familiar.  Untappd is a social tracking application for beer.  Think of it as FourSquare for beer.  Users check in, rate, comment on, track, and share beer consumption and notes.  It’s an awesome little tool for keeping track of exactly what you are drinking, where you are drinking it, and what you thought of it.

node-untappd connects your Node application to Untappd by exposing the Untappd API.  You can do all kinds of queries, checkins, comments, and the like right from within your own tool.

You can install node-untappd by using

npm install node-untappd

Make sure to read the README.md file.

To learn more about node-untappd or to see the source code, visit the github repository: http://github.com/arei/node-untappd.

To see the API details go to: http://untappd.com/api/docs/v3.

Enjoy!

permalink: http://www.arei.net/archives/240
Easing NodeJS into the Future

Let me get this right out of the way. I’m a fan. NodeJS is really cool, easy to use, and feels just right the minute you try it out. I’m all in.

Now for the bad news…

There are some problems that NodeJS is facing this year. The upside, though, is that these problems can be easily addressed. I only hope it is not too late… but maybe with a little effort, a little organization, and a whole lot of additional groundswell, we can propel NodeJS forward by a giant leap.

Hello, World

The very first programmatic encounter a new user of NodeJS will have is the Hello World example. This is a universal concept across all languages; the entry point is always Hello World. Hello World is the single most simplistic concept in writing a program in any language. It is fundamentally the most basic thing you can do.

The NodeJS Hello World example:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

The problem is that the NodeJS Hello World example basically says that NodeJS is all about building web servers, and in my opinion that is way too narrow. NodeJS is absolutely awesome at the web stack, no disagreement. Yet I believe that NodeJS is so much more and has nearly limitless potential: today we can build CLI programs, statistical analyzers, and stand alone applications all in NodeJS, and not once do we need to create a web server to do it. By defining our most basic of examples in terms of the web stack, we are defining our entire ecosystem in those terms, and that will continue to limit our potential. It is time for NodeJS to grow out of that perceptual constraint, and I believe it starts with Hello World.

My solution, one line long:

console.log('Hello World');

Once you get users in with one basic example, you move on to the next one and the next one after that. Absolutely we should be teaching our n00bs how amazingly simple it is to host your own servers in NodeJS, but NodeJS is so much more than that. We need to appeal to all the use cases, not just those who want to build servers. So show us how to do it all.

Which leads to the next point…

Documentation

One of the big problems with the documentation is that it is trying to serve two different masters, and it needs to separate its interests. On the one hand, it wants to be a teaching tool, helping new users through common problems and examples. On the other hand, it needs to be a reference document to which more experienced users can turn. Both of these objectives are utterly necessary, but they are also at odds with one another. So let’s split them apart but keep them cross-referenced with one another: the reference has pointers to the examples, and the examples link back to the reference.

Once we have separated the concerns from one another, the community can put some real effort into beefing the documentation up big time. On the examples side, we need to put some serious effort into teaching our users event driven programming: how it works, why it works, where it is good, and where it is bad. On the reference material side of the house we need to flesh things out: every single function, identifier, and object needs to be described and commented upon. Every callback also needs to be defined with exactly what is being passed into it when it is fired. I swear I spend half my time printing out the arguments from various callbacks just to understand what they are before I can actually use them.

There are literally thousands of examples on the Internet of how to use NodeJS and thousands of people willing to share their insights. So let us put all that collective talent to work creating an amazing system dedicated to teaching the technology we are all so passionate about: everything from video tutorials (like that one on youtube), to cross referenced API documentation, to examples of virtually everything we can think of, to sample applications and configurations. There needs to be a one stop shop for all things NodeJS, and its name is not Google. I want to know how to do X, and I do not want to hunt all over the place to find it. It is all about making NodeJS more approachable and easier to use.

And with that segue we move on to…

Startup

Simple NodeJS startup is wonderfully easy: just type node and go. Or you can get more fancy and supply a filename to start execution. The reality, however, is that most of us are not working in a simple world. We work in complex, custom, evolutionary, hybrid environments that defy description. I know this first hand; I’ve tried to describe them, and it’s not pretty.

So we need to make starting NodeJS easier. After all, if it seems like it’s hard to do (whether or not it really is), we are not going to do it.

See, System Administrators today have a lot invested in their current solutions. They’ve been using Apache Web Server (for example) for almost two decades. They have put a ton of work into whatever cobbled-together solution they have. Asking them to change, while great for progress, is just asking for a whole lot of argument. So why not make NodeJS fit into the architecture they already know and love? Why not provide them options and, at the same time, show them how easy it is to love NodeJS?

This comes down to three different ways NodeJS needs to run:

First, some people just want to run NodeJS. We got this one covered today. It’s easy, it’s powerful and it has a lovely command line interface built in if you want it.

Second, some people want to run NodeJS from a web container. This is basically the PHP or JSP model, where NodeJS runs behind a web server and the web server sends specific requests to NodeJS as needed. There are dozens of protocols for doing this (CGI, FastCGI, AJP, etc) and implementing several of them should be, and is, pretty easy. A few github projects do this with varying levels of success. The ultimate goal, though, is to bring these things right into the runtime so setup is as painless as possible. It could be as simple as adding a command line switch to the NodeJS runtime to tell it that the incoming request is a CGI one, or something. I am not trying to implement here, just throwing ideas out.

Finally, some users just want to run NodeJS as a robust service. For example, anyone who wants NodeJS to be their only web server and does not want to rely on third-party tools. To do so, NodeJS needs to ship with the code and tools necessary for working as a full-fledged service. Upstart and Forever are two tools that help with this, but why can we not put this technology into the runtime? This speaks to making it as easy as possible to get set up and rolling the way a user wants to get set up and rolling. And let us not forget that not everything runs on Linux like it should; Windows still has its proponents, and we need to be approachable to everyone. Integrated technology for keeping a server up, running, and monitored would be ideal. As more and more NodeJS users look to deploy NodeJS into production, easing this process becomes more and more critical.
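
In the meantime, a tool like Forever already covers a fair amount of this. A typical session looks something like the following (this is Forever’s own CLI, not anything built into NodeJS):

npm install -g forever
forever start server.js
forever list
forever stop server.js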

And I’m Spent

So that is about all I have so far. I’m sure there are dozens and dozens of other things that could really help NodeJS grow, but to me these are the big ones. Yet these are also the ones that I think can be fixed right now.

Ultimately, we need to make NodeJS more approachable, more understandable and more reliable. Today NodeJS is crazy popular, but I believe we are rapidly approaching a turn which can direct NodeJS’ fate for the years to come. It is the classic dilemma for any fledgling technology and the roadside is strewn with the corpses of those that have come before us. I honestly believe that this community can steer NodeJS to greatness, if it is willing to do so. This involves hard work, it involves decisive action, and most importantly, it involves foresight to see what is to come. If we, as a community can accept this role, NodeJS will explode to heights even we failed to imagine.

permalink: http://www.arei.net/archives/239
Quick Answers to a Hard UI Question

The Question

A friend of mine recently asked the Twitterverse the following question: “Can anyone recommend a good book on designing a good software UI? What works, what doesn’t, and in which situations.”

The Simple (but ultimately unhelpful) Answer

Keep reading for the real answer.

The Difficult Answer

I love it when people ask me this question, because it means that they are actively thinking about the Interface and they want to improve. I applaud their willingness to change. Unfortunately, wanting to change and reading a book (or two) will not get them the results they desire.

User Interface Design is a huge field of study, a specialty built on decades (even centuries, if you count Information Design) of research and learning.  There are undergraduate and graduate programs around the world that teach only this subject.  Succeeding in one of these programs is only the beginning.  Experience is what really counts in this field. A person, my friend or anyone else, is not going to learn this subject by reading one book.

It’s akin to me going to the bookstore and buying “The Complete Idiot’s Guide to Starting Your Own Business”.  Sure, this book will tell you the basics and give you some insights, but it’s only the tip of the iceberg.  Starting and running a business is an extremely complicated process and to think one book is going to get you there is woefully short sighted.

My problem with the original question is that it is naive.   It is naive to think that simply having the rules will allow you to effectively apply those rules.  It is naive to under-value practical experience in this field.  It is naive to treat an entire field of study as an after-thought.

The Real Question

The real question being asked is “Are there a couple of things that I can do to make my interfaces better?”

The answer to that is No … and Yes.

No, because as I’ve just said, there is no way to summarize quickly an entire field of study or years of experience.  Do not under-estimate just how valuable these things are to someone practicing in the field.

Yes, because I believe there are a number of tips that everyone can use to make their interfaces better from day one.  I’ll go into these really briefly, but I want to be very clear there’s a whole lot more to it, a whole lifetime of learning if you are willing to do it.  It’s an amazing, wonderful field and I thoroughly encourage everyone to study it.  Just be warned that it’s big, complex, and sometimes very unrewarding.

Remember: Interfaces are Everywhere

Anything people interact with is an interface: A Dictionary (the physical book kind) is an interface, a web site is an interface, a form you fill out for your employer is an interface, and a software API is an interface.  Each interface needs to be designed for the user.  So the next time you design something that someone else is going to use, or even for your own use, consider: how the interface works, how easy it is to use, and whether or not it meets your needs.

The Real Answer

While I encourage you to go and read the above books, if you do nothing else, keep the following rules in mind when you are designing any sort of user interface.

Consistency

The number one thing any software engineer can do to make interfaces better is to make things consistent.  I cannot begin to tell you how many interfaces I see that are inconsistent. (I’ve even made this mistake myself a number of times.)  If you do something one way in one part of your interface, always do it that way throughout the entire interface.  For example, if the Okay button is on the left and the Cancel button is on the right, do not change the order of these things somewhere else in your interface.

This also means adhering to the consistency norms defined by your Operating System or Operating Environment (a Web Browser is an Operating Environment).  Yes, you might not like it and you might think you can do it better, but the interface is NEVER (underlined and bolded) about you.  NEVER.

Details, Details, Details!

Anyone who does Interface Design should be horribly detail-oriented, almost compulsively so. Every aspect of your interface needs to be examined to ensure that the details are, going back to my previous point, consistent.  Form fields should be the same size, buttons the same size, everything aligned correctly, everything positioned perfectly, etc.  The details of the User Interface are the critical difference between good work and sloppy work.  And sloppy interfaces are bad interfaces.

Nothing annoys me more than going to another company and filling out their poorly designed forms.  (My current company is especially bad at this.)  These, as I mentioned before, are interfaces, and spending a little time to make them cleaner and clearer is worth its weight in gold.  I have, in the past, walked out of companies that were interviewing me merely because their HR forms sucked.

Know your Users

When beginning your design, the first thing you should ask yourself is what you know about your users.  Shneiderman and Cooper (the books above) can tell you a lot more about Actors and User Stories and all that, but it really just comes down to understanding how your users like to work and how your interface is going to make some aspect of that easier.  So, get to know your users.  Are your users computer illiterate? If so, it’s not likely that they will understand something like Drag and Drop right away. Once you understand your users’ motivations and needs, then you can begin to design a system that best reflects them.

This is often very tricky because none of us have millions of dollars for user studies, and shadowing, and user testing.  A lot of the time our customers are abstract visions of customers.  That’s okay.  Just take some time to try and imagine (acting or role-playing training can be really helpful here) what those customers’ motivations and needs would be.  It’s not perfect, but it will do in a pinch.

Ease of Use

So it goes without saying that the easier an interface is to use, the more people will like it.  Of course, this must be tempered against the motivations and goals of the actual users.  So the real goal is that it must be easier for the users to do what their motivations and needs require.  I once read that Ease of Use can be defined as the number of mouse clicks or keyboard interactions required to perform some task.  It’s not a perfect measurement, but keep it solidly in mind when designing.  And this segues into my next point…

People are Lazy

Assume that people are lazy and you will never be disappointed.  They want to do the least amount of effort for the greatest amount of payoff.  I call this the commitment factor: how much of my effort do I have to commit to receive the greatest payoff?  This is why the Lottery is so effective.  It does not seem to matter that the statistics are stacked against the players; they still play because it’s easy to play and the potential reward is huge.

In interface design this is equally true.  Users will like whatever interface makes things easiest.  The converse, however, plays a valuable role as well… users will like an interface that makes things easier, so long as they can control the results.  This means that lottery players like playing the lottery, so long as they can pick the numbers.  The more complicated the system that automatically picks the numbers for them, the less the users will like the results.

Ideally, the best systems are predictive systems that let the users control just the right number of variables.  What is the right number? Well, that’s where iteration comes into play.  Try a low amount, try a high amount, calculate the best amount, and then keep refining.

Understand Flow

Cooper defines Flow as “When people are able to concentrate wholeheartedly on an activity, they lose awareness of peripheral problems and distractions.” (Cooper, 4th Edition, p119). We’ve all had these Flow moments: where what we are doing is so focused, so in-depth that we don’t notice external things such as what time it is, coworkers leaving for the day, or even phone calls from our significant others wondering why we are not home yet.  This is Flow, and it’s a very, very good thing.  Flow allows users to work on their specific need at an optimum level.  It is the goal of every good interface.

Poor interfaces interrupt flow with things like unnecessary dialogs, errors, hard to use processes, etc.  The interface that interrupts less and is easier to use helps to encourage Flow.

As part of Flow I generally include visual flow in the discussion.  Visual flow is the ability of the human eye to find what it needs.  To this end, interfaces that focus on or showcase what is most important to the user are better.  In the western world we read top to bottom, left to right, so items at the top left receive more attention than those at the bottom right.  Keep this in mind as you build your interface.

Also, be aware that animated things attempting to engage or grab the user’s focus ALWAYS disrupt flow.  Use animation to enhance, never to engage.  Assume users have become oblivious to animated, blinking things and largely screen them out these days.

Design for Accessibility

One of the big failings of modern-day design is the failure to account for differences in human beings.  Some human beings cannot see, some cannot manipulate a mouse, some cannot tell the difference between red and blue.  Build for accessibility.  Be aware that some people view your site in really low resolution and that some view it in really high resolution.  Account for these differences in your fundamental design, from the beginning.  Going back and having to re-engineer your site to meet Section 508 standards can be extremely painful.  Do it right from the start.

One of the big issues here is when sites use color alone to indicate differences.  Estimates seem to place color blindness in the US at 10 to 20% of the population.  Therefore using a color alone to indicate that some change has happened is not an acceptable solution.  When in doubt, use a color change AND some other indicator (selection count, underlining, etc) to indicate the response.

Feedback

Feedback is the process of responding to user behavior.  The more feedback, the more the user knows that their actions are having an effect.  A common flaw is to do something without providing feedback to the user that it is occurring.  We see this in lots of user interfaces because almost all of them rely on a single thread model, wherein interface rendering and response happen on the same thread as most processing.  The developer who pushes processing off onto other threads (or into WebWorkers in the web space) can respond with appropriate feedback to the user without waiting for the process to resolve.

Feedback is a key factor in the responsiveness of a site, and responsiveness is a key factor in a site’s usability.  The more responsive a site appears, the more users feel like they are in control of how the system is behaving.

You Cannot Please Everyone

Just assume that no matter how great your user interface is, not everyone is going to like it.  Instead, your goal should be to hit the 80% of users who will like it.  There will always be edge users who have different motivations and needs.  So, up front, identify all the users and determine what the 80% is that you can achieve.

Sure, it is possible to build an interface that scales to every type of user.  However, you will spend a disproportionate amount of time on the last 20% compared to the middle 80%.  Think of a bell curve, and try to hit the middle of that curve.

I know two sections back I said to design for differences, but there is a separation between accessibility differences and designing for edge users.

Learn from your Mistakes

Finally, learn from your mistakes.  I always believe that the next version of your interface will be superior to the previous version, largely because you learn from the problems your users had with the current one and build a tighter interface for the next one.

As a co-worker of mine often says: It’s an Iterative Process.

Conclusion

So that’s what I have for you.  My long answer to a friend’s very simple question.  I hope I did not insult my friend, but the reality is that things are much more complicated than his initial question assumes.  That said, maybe my last section really answers the question he wanted answered.  Remember, User Interface Design is extraordinarily complicated.

A Final Note:  This topic does not take into account the whole Graphic Design aspect of UI design, for that is an entirely different field of study.

permalink: http://www.arei.net/archives/233
Tweet and Like

Today I removed comments from the site. I replaced them with tweet and like (facebook) buttons. I still maintain my stance that Facebook is dumb. Twitter, on the other hand, is the awesome.

So, if you see something you like here, tweet about it. Or Like it if you swing that way.

permalink: http://www.arei.net/archives/229
Developer Drift

Lately I’ve been the subject of what I call developer drift.

Developer Drift is the process by which an unchallenged developer slowly moves from one project to the next. The project may be in house or external or something completely fabricated by the developer’s mind, but it basically means a developer is less interested in the current project than the project over the horizon. It’s the “Grass is always greener” truism made concrete in software engineering.

For me, this has taken the form of a customer who wants really boring user interfaces, which I can crank out like they are nothing. Problem is, I almost never crank them out, because they are meaningless and I never feel challenged/creative doing them. (For me, challenged=creative.) So I take forever to implement them and tend to make a lot of excuses about why it is taking so long. I feel justified in this because when I am challenged, I really do crank out code at an extraordinary rate, one which borders on the obscene when compared to average developers. I’m very prolific when I want to be.

The same thing was true when I was back in college studying English Literature. I could crank out a paper that I found interesting in no time flat, but assign me something that was pedantic and I’d sooner rip my own teeth out with a spoon (“because it will hurt more”).

So the real question I’m trying to answer is “How do you deal with developer drift?” How do you stop people from losing interest when the project has become boring? A project I used to work on is suffering from this very problem… they want to keep the team together, but the more boring the work, the less likely they are to be able to keep the team together. Is there a way for a project to challenge its developers while still doing the boring things? Would contests or “feats of skill” help keep things from getting stale? Or should they just accept this as the life cycle of the developer, assume that people are going to drift away, and prepare for the next generation?

You tell me… how does your project deal with developer drift?

permalink: http://www.arei.net/archives/213
Maven Vitriol

I’m an ANT guy. I’ve been using ANT since 1999. It’s amazingly powerful, amazingly useful and very flexible… and I’ve never once written my own ANT task. Just using the ANT tasks available or out there in the community I have been able to build hundreds of software projects.

Now, of course, everyone says, “Go Maven”. So I went Maven… and it sucked.

See, I have very strong feelings about Frameworks. (Not to confuse you here, but Maven, like ANT, is a Build Framework.) The problem with 99% of all Frameworks out there is that they force you into a specific way of doing something. “But that’s the whole point,” you scream at me. And I agree, that’s the point… until the moment you need to do something else.

Now, I use frameworks all the time in my software development; we all do. There’s one listed over in my links section on the right that I actually endorse. So am I not being hypocritical, deriding frameworks in one breath while using them in the next? Of course I am, but here’s the caveat… I use frameworks that put the least limitation on me doing new things. Maven, for example, is a rigid framework; doing something new with it is a difficult challenge. ANT, on the other hand, while limiting, isn’t nearly as restrictive.

So here’s AREI’s Framework Measurement Testing System. First, evaluate a framework for the project you are working on, say building a Java Swing application. Next, evaluate the framework for another, completely different project you might work on in the future, say building a website. How does the framework stack up for both? Finally, consider what happens when you need to take the framework beyond its scope into new places. How accepting of that path is it?

Here’s an example:

I had an ANT script that would append together a bunch of .JS files and then minify the entire set. Worked like a champ. Then the project I was on decided to give Maven a try instead of ANT, so I had to come up with a way to do the same thing in Maven. Two weeks of work later, I ended up just calling ANT from inside of Maven.

Anyway, this post was really meant as just a simple link to an awesome blog posting that unleashes some much needed fury on Maven. But I got a little carried away in the intro.

So in summation, Maven sucks. Pick a framework that you can change. Damn the man! Save Empire!

permalink: http://www.arei.net/archives/193
Google Wave

Google announced yesterday a new offering called Google Wave. There is an excellent article at TechCrunch that gives a complete overview. All I can say is: THIS IS HUGE, PEOPLE. The concept of Google Wave, the underlying idea, is exactly the right step that the web and the internet need to take. It’s a combination of Email, Instant Messaging, Twitter, Transparency, Collaboration, Wiki, Media sharing, and so much more. Oh, and it’s an Open API and Open Source to boot.

If you’ve ever talked tech with me in the last three years and we’ve had the discussion about what is next in technology, then we’ve had a very similar discussion about the concepts behind Google Wave. I’m not claiming Google has once again stolen my idea, but what I will claim is that there clearly is a need for this type of product, a need that I saw and Google saw and others saw as well.

Two things in particular I want to call your attention to that make Google Wave a huge idea:

1). Ad hoc groups… What ad hoc grouping really does is let people create internet groups as easily as they create real world groups. Think of it this way… when you walk into your workplace break room and two other people walk in and the three of you start talking, you have created a group. It’s fast in the real world, so why can it not be like that in the internet world? For most of the internet, when you want to form a group and start working together, there’s a fairly large investment in creating the meta systems the group needs: setting up a mailing list, setting up forums, setting up a media store, setting up a user database, etc. It’s a pain in the ass, really, and it makes setting up a group fairly non-trivial. To some degree Yahoo Groups and Google Groups automate a lot of that, but then you still have to go through the effort of getting people to join, etc.

What Google Wave does is make group creation simple and almost instantaneous. Basically, you create a group by dragging a bunch of contacts together and off you go. You can immediately begin discussion, expand the group, whatever. Additionally, if you need other tools for that group, like a map or a document or some other thing, you can add a widget or a robot to participate in that group and boom, you have more functionality to fill the group’s needs.

2). The second feature that is huge for me is transparency. Transparency is the notion behind Twitter: you broadcast snippets of your activities out to the world and everyone can see what you are doing. That is only the beginning of transparency, though. To take it further, you need to automate transparency such that as you do things, they are automatically published. Of course, this brings up privacy concerns, but I think there are simple ways to solve those.

I know that Wave has some level of transparency, but it’s still too early to tell how much. I suspect that even if this is an opt-in version of transparency, like Twitter, there will soon be robots and widgets (the extensions to Wave) that automate a lot of this functionality.

The point is, though, that transparency can be a huge social tool… but more importantly, it could be a huge business tool. And the company that sees this is going to get a huge edge over the company that does not.

Anyways, those are my thoughts on Google Wave. You should definitely check it out.

permalink: http://www.arei.net/archives/122
Abstracted Fear in Software Engineering

Why am I never amazed when a software engineer wants to add complexity? By this I mean all these abstraction layers of tools and frameworks and engines. These tools/frameworks/engines are supposedly designed to save developers time, but really they just add complexity to the stack and in the end don’t save time at all. Now, instead of just knowing Programming Language X, I need to know Programming Language X and Abstraction Layer Y. How is working in two technologies easier than working in one?

I know some will argue, “Well, you only really need to work in Abstraction Layer Y.” Except that is never the case. Abstraction Layer Y is great until you have to do something outside its basic cases. Then you have to go back to working in Programming Language X. Suddenly you’re working in both.

Don’t believe me? Consider the case of Google Web Toolkit (GWT)… GWT lets you write web pages in GWT’s Java toolkit. It supposedly removes the need to work in HTML and Javascript and CSS. Except, well, CSS is still used to provide your site’s look and feel. Oh, and you still use HTML inside the GWT code. Oh, and you want to do something a little more complex than what GWT offers? Looks like you’re going to have to embed Javascript into your GWT code as well. So, instead of just having to work with HTML, CSS, and Javascript, now I have to work with HTML, CSS, Javascript, and GWT.

Seriously, it’s basic math: X + Y > X where Y is greater than 0. Yet time and again engineers are all gung ho about Abstraction Layer Y, and I suspect it’s for one reason: Fear. They’re afraid that they will have to do work, afraid that their incompetence will be visible, afraid that their true value to the company will be realized. Fear is an extremely strong emotion, and it can lead to panic, chaos, and distrust. But underneath it all is pure cowardice. Cowardice about getting things done, cowardice about admitting your shortcomings, and cowardice about facing your own mediocrity.

And the worst kind of cowards are the ones who want to do trade studies. Let us spend all of the budget and time for the project on comparing Abstraction X to Abstraction Y to Abstraction Z before we choose one. Don’t worry though, we’ll have plenty of time and money after we run out of time and money to build the software. Shouldn’t take more than two weeks, right? The engineers who want to do this are the ones with the most to fear. I believe they would rather do trade studies than write code. If you don’t want to write code, if you don’t LOVE to write code, you’re not a software engineer… you’re middle management. You need to accept your fate and go work for the federal government wasting taxpayer resources.

So, is there ever a case where Abstraction Layer Y makes sense? Probably, but far, far less frequently than one might imagine. Sometimes an abstraction layer actually achieves the nirvanic state it set out to reach, that sweet spot between being overly simple and overly bloated. I suspect that of the thousands of Abstraction Layer Y’s out there, only one or two have actually found that spot.

And yes, this is a little bit of a rant against Abstraction Layers… but it’s not a rant against the people who write them. The people who write Abstraction Layers are freaking geniuses. Basically, they have identified the fears of software engineers and are exploiting them for money or recognition or whatever their particular motivation is. That is brilliant in my opinion, and I seriously need to start working at one of these companies.

To conclude: Abstraction Layer Y sucks and is a waste of time.  Just write the code yourself.  It’s not that hard.  If you think it is that hard, you need to quit your job now and go manage the fast food restaurant you were always destined to manage.

permalink: http://www.arei.net/archives/104
Minification

There has been a great deal of discussion lately about the value of minification, yet very little of it concrete. To that end I have decided to set down my thoughts from these discussions and share them so that everyone can be on the same page. Below I will outline what minification is and what it gains you. This is followed by a discussion of minification versus HTTP Compression. Next, I will look at the concept of minification as obfuscation and the usefulness of obfuscation in general. Finally, I will share my recommendation on the subject of Minification and which tools I would recommend.

MINIFICATION

To start, let's define what minification is: Minify, or Minification, is the process of taking some source file and compressing it to a smaller size by removing whitespace and comments. A number of other small rules are sometimes applied as part of the minify process, but these are less significant. For example, a minifier will shorten variable names to two or three characters and thus gain some space there.

Consider this simple bit of code, which is not minified:

for (var i = 0; i < 100; i++)
{
    var randomnumber = Math.floor(Math.random() * i);
    document.write("A random number between 0 and " + i +
                   " is " + randomnumber);
}

When minification is applied, the code becomes this:

for(var i=0;i<100;i++){var randomnumber=Math.floor(Math.random()*i);document.write("A random number between 0 and "+i+" is "+randomnumber);}

(Please note that any line breaks you see in the minified code above are due to word wrapping in your browser. Minified code is generally a single line.)

In essence, minification reduces overall code size, and thus reduces the client's download time. The downside is that by removing whitespace and comments and renaming some variables, the code becomes unreadable, extremely hard to debug, and the process may even introduce unforeseen and hard-to-find bugs.

When we are talking about Web applications, both Javascript (JS) and CSS files can be minified.

In a small test case, I put two files through the minification process to see what the net gain would be. The Javascript file is small with few comments. The CSS file is large, also with few comments.

FILE SIZE                                JS                    CSS             
Bytes                                    17954                 39020
Minified Bytes                           10413                 31178
Net Gain                                 42%                   20%

As can be seen, minification is much more useful for Javascript, where whitespace is used much more heavily. CSS tends to have less whitespace, and thus its gains are smaller. Gains will also be much larger if your code is heavily commented: since comments are removed, minification gains are directly related to the amount of commenting.

So a 42% gain seems like a lot, especially when you are talking about really slow network connections. Yet the question is: are those gains really worth the sacrifice in the ability to debug your code? And if one does not want to sacrifice the ability to debug and read the code, what can be done instead of minification?

HTTP COMPRESSION

HTTP Compression is a standard part of the HTTP 1.1 protocol and is supported by almost all major browsers and web servers on the market today. In essence, whenever an HTTP request is made by a browser that supports HTTP Compression, a special header is added to the outgoing request listing the compression algorithms the browser can decompress. When a Web Server sees this header, it checks whether the requested file can be compressed and whether the server supports one of the algorithms the browser listed. If so, the server runs the compression algorithm over the response before it is sent back to the browser. The browser receives the compressed response and decompresses it before making it available.
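
To make that exchange concrete, here is a minimal sketch of the server's side of the negotiation, written against Node's http and zlib modules. (The file name and port are made up for the example; in practice your web server handles all of this for you.)

const http = require('http');
const zlib = require('zlib');
const fs = require('fs');

http.createServer((request, response) => {
    // The browser lists what it can decompress, e.g. "gzip, deflate".
    const accepted = request.headers['accept-encoding'] || '';
    const body = fs.readFileSync('example.js');
    if (accepted.includes('gzip')) {
        // Both sides support gzip: compress the response and say so.
        response.writeHead(200, {'Content-Encoding': 'gzip'});
        response.end(zlib.gzipSync(body));
    } else {
        // No common algorithm: send the bytes unchanged.
        response.writeHead(200);
        response.end(body);
    }
}).listen(8080);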

HTTP Compression has many advantages and few disadvantages. Foremost, it is an entirely automated process handled by the server, and it requires only a minor configuration change on most Web Servers to enable. The browser requires no special functionality to use it. The disadvantage is a minor performance hit on both sides: the server must spend time compressing the response, and the client must spend time decompressing it. Yet in most cases the file sizes involved make the time spent in compression minimal. If file sizes were significantly larger, compression might begin to cause performance problems.

Let us consider our test files again. This time let us look at our net gain from using HTTP compression only.

FILE SIZE                                JS                    CSS             
Bytes                                    17954                 39020
Compressed Bytes                         3906                  6708
Net Gain                                 78%                   82%

Right away it's apparent that HTTP Compression offers significant gains. Compared to our previous results from minification, HTTP Compression is the clear favorite. The reason for these significant gains is that HTTP Compression works across all the bytes of the source files, whereas Minification largely leaves the meat of the CSS and JS files alone and works mostly on their whitespace.
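
If you want to reproduce these numbers against your own files, a few lines of Node will do it. This is just a sketch: it reads the file named on the command line, gzips it in memory, and prints the net gain.

const FS = require('fs');
const zlib = require('zlib');

const source = FS.readFileSync(process.argv[2]);   // the JS or CSS file to test
const compressed = zlib.gzipSync(source);
const gain = Math.round((1 - compressed.length / source.length) * 100);
console.log(source.length + ' bytes -> ' + compressed.length +
            ' bytes (' + gain + '% net gain)');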

Given the gains of HTTP Compression, the next logical question is: what if I did both minification and HTTP Compression? The result is an even greater improvement, but not as much as one might imagine. Consider our test files again:

FILE SIZE                                JS                    CSS             
Bytes                                    17954                 39020
Compressed & Minified                    2514                  5907
Net Gain                                 86%                   85%

Overall, when employing both techniques, the additional gain from minification is not as significant. For our JS file there is only an 8% gain over plain HTTP Compression. There is even less gain for the CSS file. However, please remember that both the JS and CSS files in our test are very light on comments. More comments will increase the gains somewhat, but less than one might think.

Given these results, the question to ask is whether an additional 8% gain over plain HTTP Compression is worth sacrificing the ability to read and debug my Javascript and CSS.

So maybe Minification is not the way to go after all. But what about Minification as a means of protecting my code from people who want to steal my hard work?

OBFUSCATION

In the world of interpreted and bytecode languages such as Java, C#, Javascript, PHP, Groovy, etc., there has long been the question of how to prevent someone from reading or stealing your code. The more openly accessible the language, the easier it is to read and copy.

Obfuscation is the process of applying a series of heuristics to code to make it harder to read and more confusing to understand. The intent is that by applying these rules, the code becomes such a confusing mess that nobody would bother to read it.

Java Obfuscation is a fairly complex process. Hundreds of heuristics are applied to the incoming source code, and the resulting code is a nightmare of crazy class, method, and member names that make scanning it painful.

Obfuscation for Javascript is much less powerful. Because Javascript is meant to be a widely open language, the ability to obfuscate the code is minimal. Variables, objects, and methods in Javascript have no scope privacy and thus can be accessed by anyone, and they are often designed that way. This means their names cannot be obfuscated, because the names have relevance to the structure. You cannot change the name of a given method, because in many cases there is no way to tell who might call that method or where in the code that might happen.
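
A tiny, hypothetical example of why this is so: Javascript lets any caller reach a method by its string name, so an obfuscator can never prove a public name is safe to change.

var api = {
    doWork: function() { return 42; }
};

// Elsewhere -- perhaps in code the obfuscator never sees -- the method
// is looked up by its name as a string:
var methodName = "doWork";
api[methodName]();   // renaming doWork would silently break this call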

Yet the biggest problem of all with Obfuscation is that it is almost completely worthless. Obfuscation will only stop the most basic of people from copying your code. Recovering obfuscated code is fairly simple in most interpreted languages. With many of today's sophisticated IDEs, reformatting and stepping through code is easy. Additionally, many IDEs support variable name matching, such that when an obfuscated variable name is selected, all of its uses are highlighted. All of this means taking obfuscated Javascript back to readable code is far easier than one might suspect.

Another big strike against obfuscation is that Javascript can be employed in very diverse and powerful ways. Closures, functions as objects, objects as associative arrays, and the like can all add up to some fairly complex Javascript. Often this kind of programming can confuse an obfuscator that is not designed to handle the diversity of the language. Even while writing this post I managed to break one obfuscator with just the sample for loop code from above.

Obfuscation, at its best, serves to help keep honest people honest. It is not going to stop someone whose intent is to copy your code; it's not even going to put up a decent struggle.

Minification, while not technically an obfuscator, does provide some obfuscation-like processing. The removal of whitespace and the collapsing of localized variables all make the code harder to read. Yet not much else is really changed. Consider the sample code block we looked at earlier. With minification as obfuscation, it is still fairly readable:

for(var L=0;L<100;L++){var dS0=Math.floor(Math.random()*L);document.write("A random number between 0 and "+L+" is "+dS0);}

Given this code and about two minutes on the internet, or a few search and replace calls, you can turn it back into fairly readable code, as shown here:

for (var L = 0; L < 100; L++) {
    var dS0 = Math.floor(Math.random() * L);
    document.write("A random number between 0 and " + L + " is " + dS0);
}

It's not a perfect match of our original code, but it is fairly close and takes little real work to achieve.

Ultimately, Obfuscation can have some utility as a very basic first line of defense, but the ability to debug and read your code is reduced, and in some instances obfuscation can even add to code size, albeit by a very small amount. These limitations must be carefully considered before undertaking obfuscation. What are you really trying to accomplish when you obfuscate your code?

Unfortunately, there is no way to really protect your Javascript code on the internet. HTML, Javascript, and CSS are all designed to be plain text readable formats, which means that absolutely nothing stands between the site code and the end user. The only real protection you have from people reading and copying your code is how much they worry about your legal recourse.

CONCLUSIONS

So where does this leave us with regards to Minification?

Well, we showed that from a size compression standpoint, minification has some small value, but not nearly as much value as just turning on HTTP Compression. From an obfuscation standpoint, we outlined the fundamental weakness of obfuscation in general. Given these factors, my recommendation is to carefully consider whether or not minification is actually needed in your particular case.

I would ask myself these questions:

1). How important is the ability to read and debug my code?

2). How much time and resources do I wish to invest in tracking down bugs in minified code?

3). How relevant is the commenting in my code to the ability to debug and read my code?

4). With Obfuscation, who am I trying to protect my code from, and how competent are they?

Ideally, I think the best approach is no Obfuscation, HTTP Compression turned on, and a reduced form of Minification in which only comments are removed from the code. This gives you high compression while still maintaining the ability to read and debug your code. In my opinion, obfuscation is simply not worth the effort for the benefit you receive from it.

CONFIGURATION NOTES

HTTP Compression can be turned on in either Tomcat or Apache Web Server with just a little bit of effort. Tomcat requires a mere three lines of configuration (essentially enabling the compression attribute on its HTTP Connector) to get HTTP Compression working. Apache HTTP Server requires a few more lines, but it is still relatively easy. For more information about configuring Tomcat see http://viralpatel.net/blogs/2008/11/enable-gzip-compression-in-tomcat.html. For more information about configuring Apache Web Server see http://httpd.apache.org/docs/2.0/mod/mod_deflate.html.

TOOLS NOTES

Finally, I will plug the YUI Compressor. There are a lot of Minification programs out there, but YUI Compressor seems to allow for the most configurability. I also like YUI Compressor's ability to minify both CSS and JS files within one simple program. You can find out more about YUI Compressor at http://developer.yahoo.com/yui/compressor/
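
For what it's worth, basic usage looks something like the following (the jar version number here is illustrative; check the site for the current one). The --nomunge option skips variable renaming, which keeps the output a little more debuggable:

java -jar yuicompressor-2.4.jar -o app-min.js app.js
java -jar yuicompressor-2.4.jar --nomunge -o app-min.js app.js
java -jar yuicompressor-2.4.jar -o styles-min.css styles.css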

permalink: http://www.arei.net/archives/97
IE Bug: Input type="checkbox" checked property

So I found a fun little bug in IE 6 just now. I don't know if it exists in IE 7 or 8.

Essentially if you have an element as shown:

<input type="checkbox" id="test-checkbox" />

And do the call:

document.getElementById("test-checkbox").checked

The result will not always reflect the checked state of the element. The reason is the lack of a "name" attribute on the input element; IE 6 appears to reset the state of any checkbox form element that is missing a name attribute under certain conditions. Adding the "name" attribute solves the problem.
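
In other words, the fix is as simple as this (the name value itself is arbitrary; it just has to be present):

<input type="checkbox" id="test-checkbox" name="test-checkbox" />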

Spent 2 hours on that gem.

permalink: http://www.arei.net/archives/40