Writing Code for Other People

Writing code is hard. Getting code to work in a repeatable, consistent fashion that addresses all of the possible use cases and edge cases takes deliberate time and fortitude. Yet getting code working is only half the battle of writing good code. See, good code not only solves the problem; it does so in a manner that lets other people use your solution not just to solve a problem, but to understand the problem, to understand the solution, and to go beyond both. Good code works, but great code teaches.

So, with that in mind, how do we, as engineers, write great code? What separates code that merely gets the job done from code that empowers the people who read it? How do we transform our code from simply functional to deliberately empowering?

Below I present my ideas on what makes code approachable to others. Following these techniques will make your code more readable, more understandable, and more reusable by others. Meeting these goals requires attention to four specific areas: readability, context, understandability, and reusability.

Readability: Write Beautiful Code

The first step in writing great code is to make it readable to others. By the very nature of using a high-level programming language, one would think that readability is solved, but a programming language alone is not enough. A programming language, like any communication system, is merely a means to express an idea. It is how we use that language, and how we structure that usage, that makes our code beautiful or not, that gives it real meaning.

1. Indentation Conveys Hierarchy

Most modern programming languages do not require indentation. You could simply write all your code like this:

function add(x,y) {
return x + y;
}

and it would compile and execute just fine. But clearly, there is a reason why most modern programming languages do allow for liberal indentation. This code with indentation is infinitely more readable:

function add(x,y) {
    return x + y;
}

We use indentation to indicate hierarchy. This works because of how we visually scan things with our eyes. Western-language readers are taught to scan from top to bottom, left to right. Indentation plays into that. As you scan down the code from top to bottom, the left-to-right indentation allows you to easily ascertain hierarchy within the code.

The return statement in the above code is clearly subordinate to the function definition, merely by the act of indenting it.

In fact, beyond programming languages, most descriptive languages like HTML, XML, CSS, and JSON all support indentation to imply hierarchy. One of the hardest things of all when inheriting someone else’s code or data is finding it not well formatted. Reformatting tools and format commands mostly work, but not always, and that is where things can rapidly fall apart.

2. Meaningful Whitespace

Indentation is not the only way we convey meaning in our code using whitespace. Consider this code:

const FS = require('fs');
const contents = FS.readFileSync(process.argv[2], 'utf8');
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

Like our indentation example, this is perfectly executable code that will run and do the several “things” it is intended to do. But understanding what those “things” are is not so clear.

Code runs in an organized top to bottom fashion, and we mostly structure that code into logical steps. The code reads a file, the code breaks the file contents up by newlines, etc. These are the steps that your code performs.

If you look at this code you can even begin to see each of these steps. It might take a minute or two, but the steps are there.

Now, look at this code again:

const FS = require('fs');

const contents = FS.readFileSync(process.argv[2], 'utf8');

let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

The logical steps of our code are much clearer to see, much cleaner to read. The only change between the two examples is whitespace.

This process of visually splitting the logical steps of code apart is called meaningful whitespace. Meaningful whitespace is the art of using well-placed line breaks to chunk code into discrete steps. Just like indentation, we use whitespace, new lines, to organize the logic of what our code does so other people can easily understand it. And this is about more than just breaking up large chunks of text. Smaller blocks of code are easier to understand when examined than larger ones. Visually, when reading this code, one can easily see that the lines within each step belong together and work together to achieve some result.

This is also why it is so helpful that most programming languages ignore whitespace: the compiler does not care about it, so it is free to carry meaning for the human reader.

When inheriting someone else’s code it is not uncommon to find a complete lack of meaningful whitespace. The burden this places on the developer, who must go through the code and chunk it up themselves, is significant and slows down their understanding. By grouping your code into chunks that are digestible and understandable to someone else, you shorten their time to understanding.

Context and Comments

Up to now we’ve talked about how to make our code more readable. Indentation and meaningful whitespace provide readers of your code with easy cues to help them scan and break down your code, and this makes the code more readable. Readability is the first key to great code.

The next step to great code: context. That is, once my code can be easily read, how do I give it meaning so it can be easily understood? The keys to understanding code are understanding the context any section operates within and on, and understanding how the logic of that section flows.

3. Contextual Naming

Fun fact: excluding comments and data, the only things in your code that you get to name are classes, variables, and functions. Everything else is a pre-defined keyword or operator of some type. This is why class, variable, and function naming is so important: it is the only part of your actual code where you can provide context about what your code does.

Consider the following code:

function x(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : x(y - 1) + x(y - 2);
}

What this code does is not obvious by just reading the code. Providing a contextual name to the function changes everything here:

function fibonacci(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : fibonacci(y - 1) + fibonacci(y - 2);
}

Class names, function names, variable names, these are opportunities to describe what they represent (classes), what they do (functions), and what they hold (variables). The name provides context to the role in the system.

Now, there are a few special rules around this, for sure. When iterating in a loop, we use i for the index. This is a well-known convention. But even this convention can stand breaking sometimes. If the context is more important than the convention, always go for more context.
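
For example (a hypothetical sketch), a simple loop reads just fine with i, but nested loops over a grid read better with named indices:

const items = ['a', 'b', 'c'];
const grid = [[1, 2], [3, 4]];

// Convention: a simple index loop uses i.
for (let i = 0; i < items.length; i++) {
    console.log(items[i]);
}

// Context over convention: named indices make the nesting clear.
for (let row = 0; row < grid.length; row++) {
    for (let col = 0; col < grid[row].length; col++) {
        console.log(grid[row][col]);
    }
}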

4. Convey Logic through Comments

Class names, function names, and variable names all provide context as to their roles, but program logic is dictated by keywords, and in almost all programming languages there is no way to change keywords. So how do we provide context and meaning around program logic?

That is where comments come in, and specifically inline comments within the code. I view this differently than documentation. Documentation answers the bigger questions about the code’s purpose and how people use the code. Comments, instead, help describe the logic, and the context around the logic, in the code. (We will talk about documentation in greater detail in a little bit.)

Comments within your code are simple, useful hints as to how the logic within the following section of code works. Have you ever read a block of code and then wondered to yourself what is going on? That is a failure of commenting. The initial author failed to recognize that another developer might not understand the code, and that stems from either hubris or laziness, neither of which is a particularly good habit for a developer.

Now, this is not advocacy for writing a comment before every line of code. I am a firm believer in writing code that is simple enough to follow without comments, but that is not always achievable, and over-commenting creates a whole separate set of problems. Instead, comment where it is needed. When considering any section of code (which you nicely formatted with whitespace), ask yourself, “Is it readily obvious what this does?” If the answer is even slightly no, take a second to add a comment or two of context.

This code from an earlier example is a lot more understandable for a handful of comments, and adding these comments as the logic is written costs almost nothing:

// Bring in the standard filesystem library
const FS = require('fs');

// Read the file
const contents = FS.readFileSync(process.argv[2], 'utf8');

// Split our file content into lines and
// convert each line to an array of simple words
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.trim();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

// Count the occurrence of each word in all the content.
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

// Output our word cloud and frequency 
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

There are two different uses of comments. First is what we covered above, to add logic understanding to your code. The second use of comments is to add documentation around how your code is used. This second type is covered in a later section, so please keep reading.

For logic comments, the general rule of thumb is to use // instead of /* */ in your code. /* */ block-style comments are really more for providing documentation than for quick one-off logic comments.
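
For example:

// A quick one-off logic comment.

/*
 * A block comment, better reserved for the kind of
 * documentation we will discuss later.
 */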

Also, when writing logic comments, do not hide them at the end of a line. Put them on their own line, where they are far more visible.

superWeirdThing = !superWeirdThing; // DON'T COMMENT LIKE THIS
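
Instead, give the comment its own line above the code it describes:

// Toggle the super weird thing so the next pass takes the other path
superWeirdThing = !superWeirdThing;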

Organize Your Code

Understanding your code comes partially from the context you provide about what that code does, but equally important to understanding is organizing and structuring your code in a repeatable, understandable, and accepted fashion.

5. Organized from Top to Bottom

The first step in organizing your code is to physically lay the code out in logical sections. Almost all modern programming languages have a recommended approach to organizing your code. Different languages have different approaches, and even different schools of thought within a single language. JavaScript is one of the more chaotic languages, with little formal agreement on the best way to do this. However, there are still little things you can do without a formal, community-accepted standard. For example, import/require statements are expected to come first in the file.

The important thing here is to keep like with like. Do not put two class definitions separated by some variable assignments. Put the class definitions together. This is an implied structural organization for your code and it applies at the file level, the class level, and the function level.

A common layout pattern for a file in many languages (sketched below) is:

  • Require Libraries
  • Declare Constants
  • Declare Variables
  • Declare Classes
  • Declare Functions
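
A minimal sketch of that file layout in JavaScript (all of the names here are hypothetical):

// Require Libraries
const FS = require('fs');

// Declare Constants
const DEFAULT_ENCODING = 'utf8';

// Declare Variables
let cache = {};

// Declare Classes
class TextFile {
    constructor(filename) {
        this.filename = filename;
    }

    read() {
        return FS.readFileSync(this.filename, DEFAULT_ENCODING);
    }
}

// Declare Functions
function readCached(filename) {
    cache[filename] = cache[filename] || new TextFile(filename).read();
    return cache[filename];
}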

A common layout pattern for a class in many languages (sketched below) is:

  • Declare Static Members
  • Declare Members
  • Declare Static Methods
  • Declare Constructor
  • Declare Methods
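
Here is a sketch of that class layout (a hypothetical Thermostat class; the static and private fields assume a modern JavaScript engine):

class Thermostat {
    // Declare Static Members
    static MIN_TEMPERATURE = 10;

    // Declare Members
    #temperature = 20;

    // Declare Static Methods
    static clamp(value) {
        return Math.max(Thermostat.MIN_TEMPERATURE, value);
    }

    // Declare Constructor
    constructor(temperature) {
        this.#temperature = Thermostat.clamp(temperature);
    }

    // Declare Methods
    get temperature() {
        return this.#temperature;
    }
}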

There is a common layout pattern for writing functions in some languages, but JavaScript is a little looser here, so I am not going to offer a formal pattern. Instead, I will tell you how I like to structure things for a function (sketched after the list):

  • Arguments and Argument Checking at the top
  • Declare variables as I need them, but nearer to the top if possible.
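
Putting that together, a hypothetical function following this structure might look like:

function averageOf(numbers) {
    // Arguments and argument checking at the top
    if (!Array.isArray(numbers) || numbers.length === 0) {
        throw new TypeError('averageOf requires a non-empty array of numbers.');
    }

    // Declare variables as needed, near the top
    let sum = 0;
    for (const num of numbers) {
        sum += num;
    }

    return sum / numbers.length;
}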

6. Separation of Concerns

In computer science the concept of Separation of Concerns (SOC) allows one to divide a computer program into logical sections based on what each section of that computer program does. In web development, HTML, CSS, and JavaScript are an example of Separation of Concerns: HTML is responsible for the content, CSS is responsible for how the content looks, and JavaScript is responsible for how the content behaves. Model-View-Controller (MVC) is yet another Separation of Concerns system that is often used.

But beyond a fancy CS degree, Separation of Concerns means creating one thing to do that thing well, without side effects. It means writing many functions that each do one thing well, instead of one single function that handles all the cases. It also makes more sense, in terms of reusable code, to break things up into smaller, discrete units.

It also means letting the various pieces do exactly what they are good at and not trying to force them to do tasks that are better suited elsewhere. It is for this reason that we should not write style information in JavaScript when a perfectly good CSS rule will do. CSS is optimized for dealing with these things and JavaScript is not, so why would you try to outwit it? Of course, there are always times when you have to break these rules, but they are very, very rare.
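
For example, rather than pushing style information into JavaScript, add a class and let a CSS rule do the work (the element selector and the .error rule here are hypothetical):

const element = document.querySelector('.message');

// Forcing style information into JavaScript...
element.style.color = 'red';
element.style.fontWeight = 'bold';

// ...versus letting CSS do what it is optimized for:
element.classList.add('error');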

Related to SOC, in Computer Science there are three other principles to be aware of:

  • Coupling – Coupling is the measurement of how interdependent two modules or sections of code are. There are a lot of types of coupling between two sections of code, with a lot of fancy names and descriptions. For now it is best to just understand it this way: How dependent on A is B, and vice-versa? Can A run without B? Can B run without A? Two modules are said to be Tightly Coupled if they are very dependent on each other. Conversely, they are said to be Weakly Coupled if they are only slightly dependent on one another. Some coupling in code is necessary because code executes linearly. However, designing classes and functions to be independent provides for better re-use and more understandable code. Consider some code you are looking at and that moment when you say, “Well, this calls function X, now where is function X and what does it do?” That is coupling at work.
  • Cohesion – Cohesion is somewhat the opposite of coupling, as it measures the degree to which things within a section or module of code belong together. This is usually described as High Cohesion, meaning things within the module belong together, or Low Cohesion, meaning that they do not. In Object-Oriented Programming, high cohesion within any given class is a design goal. The class should do what it sets out to do and only what it sets out to do. If I have a class called Car and inside that class there’s a function called juggle(), it probably means my class has poor cohesion (and, interestingly, probably also implies tight coupling with something else). Side effects from running a module or function imply poor cohesion. If the function juggle() also fills the car with gasoline, either this is an amazing juggling trick, or there is a side effect in the code.
  • Cyclomatic Complexity – Cyclomatic Complexity is the measure of how complex a section of code is, based on the number of paths through the code. That is to say, the more non-linear outcomes the code generates, the higher its Cyclomatic Complexity score. This is often judged by the number of conditional statements in your code. A function that has no conditionals is said to have a Cyclomatic Complexity of 1. If it had a single conditional with two outcomes (a typical if/else, for example) it would have a Cyclomatic Complexity of 2, and so on. Cyclomatic Complexity is not bad in and of itself, but the more complex the code, the higher the possibility of failure, and the more testing is required. A function should strive for a lower complexity score if possible (see the sketch below).
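
To illustrate, compare these two hypothetical functions:

// Cyclomatic Complexity of 1: a single path through the code.
function double(x) {
    return x * 2;
}

// Cyclomatic Complexity of 3: two conditionals create three possible paths.
function describeSign(x) {
    if (x < 0) return 'negative';
    if (x === 0) return 'zero';
    return 'positive';
}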

So, we talk about these three measures of code in order to reinforce the point of Separation of Concerns. A well-separated module is one that is not coupled directly to another module, is cohesive in what it does, and keeps complexity low.

When you find yourself in code that is hard to follow, it is almost always because it fails on one of these three factors.

One way to reduce complexity and structure your code better is to identify the cohesive sections of your code and isolate them into Logical Blocks. A Logical Block is a portion of your code that requires multiple lines to complete a single task. As we talked about earlier, breaking your code up with meaningful whitespace makes these logical blocks easier to identify. Adding comments makes them clearer in their purpose.

But a Logical Block is also a good hint as to how to abstract logic into smaller, more cohesive behaviors. For example, if your logical block needs to be reused, isolate it out into a new function.

Also, Code Folding is worth mentioning here. Code Folding is a feature of most modern IDEs that allows you to hide (or fold) blocks of code, and it can be a powerful code-reading tool. However, Code Folding requires certain syntax structures to work. By thinking of your code as Logical Blocks you can see where Code Folding opportunities could exist, if the logical block were in an acceptable structure.

Thinking about your code as Logical Blocks also allows you to more easily see other ways to do the same thing, ways that may be cleaner and more cohesive. It illustrates where there may be room for improvement in your code as well.

For example, this code is a Logical Block.

let sum = 0;
for (const num of numbers) {
    sum += num;
}
const avg = sum / numbers.length;

You could change this logical block to be more readable by…

  • Abstracting it out to a function, especially if it is going to be reused.
const avg = calculateAvg(numbers);
  • Wrapping it in an IIFE (Immediately Invoked Function Expression)
const avg = (() => {
    let sum = 0;
    for (const num of numbers) {
        sum += num;
    }
    return sum / numbers.length;
})();
  • Using a built-in method to do it more simply.
const avg = numbers.reduce((sum, num) => sum + num, 0) / numbers.length;
  • Using a third-party library helper function (Lodash’s _.mean, for example).
const avg = _.mean(numbers);

All of these are better than the original because they create an isolated block that cleanly identifies what it does. Proper whitespacing and a comment above the block make this tight, readable, well-reasoned code.

Code Usability

At some level, everything you see, smell, taste, touch, or interact with is a User Interface. An API, for example, is a User Interface between your backend and your frontend. This goes for the code you write as much as anything else. At some point, another person is going to read your code, run it, and have to make sense of it. So, build your code like you would an API. Take into consideration how another person is going to interact with that code, how they are going to reason about it, and how they are going to work within it.

Build everything with another user’s experience in mind.

7. Defensive Coding

Software is a piece of code that produces some sort of output, usually based on some sort of input. Now, assume someone is going to run your code at some point, and assume that that person is not going to read the documentation. What would happen? Would it work if the first argument is null? Would it work if the second argument is a negative zero or a NaN? How your code behaves when executed under uncertain conditions is a testament to the quality of engineering employed in building it.

An approach to handling these questions is called Defensive Programming. In defensive programming you assume that users will maliciously try to cause problems with your code and therefore your code must guard against their attack. This means making the assumption that all data is tainted and must be verified to be correct.

Here’s an example:

if (this.onClose) this.onClose();

The assumptions made by this code are not defensive. The snippet assumes that onClose is a function and then executes it as a function. A corrected version of this code might look like this:

if (this.onClose && this.onClose instanceof Function) this.onClose();

But there is another assumption here as well: that executing this.onClose() will work. So even more defensive code might do it thus:

if (this.onClose && this.onClose instanceof Function) {
    try {
        this.onClose();
    }
    catch (ex) {
        ... do something about the error ...
    }
}

There is even an argument that if this.onClose() is a long-running process, it could grind your thread’s execution to a halt, so an even more defensive posture might be thus:

if (this.onClose && this.onClose instanceof Function) {
    setTimeout(()=>{
        try {
            this.onClose();
        }
        catch (ex) {
            ... do something about the error ...
        }
    },0);
}

That last form might not work for all situations, but it is extremely defensive.

The key to writing code defensively is to check your inputs early in your code. Make sure what you are getting is what you are expecting. If your function only works with even-numbered integers, you had better make sure it accepts only even-numbered integers.
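
As a sketch, a hypothetical function that only works with even-numbered integers might check its inputs like this:

function sumOfEvens(values) {
    // Verify the input before doing any work
    if (!Array.isArray(values)) {
        throw new TypeError('sumOfEvens requires an array.');
    }
    values.forEach(value => {
        if (!Number.isInteger(value) || value % 2 !== 0) {
            throw new TypeError('sumOfEvens accepts only even integers.');
        }
    });

    return values.reduce((sum, value) => sum + value, 0);
}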

The same is true for classes. Write your classes to assume that users are going to try to harm them and break your code. This means writing defensive methods, but it also means preventing unwanted or unknown mutation of class state. Setters are a great way to protect your class from bad user input and should be used whenever possible instead of exposed public members. Consider this class:

class Car {
    make = 'Chevy';
    model = 'Malibu';
    mpg = 25;
    tankSizeInGallons = 14;

    toString() {
        return `The ${this.make} ${this.model} can go ${this.mpg * this.tankSizeInGallons} miles on a single tank of gas.`;
    }
}

The publicly exposed make, model, mpg, and tankSizeInGallons are all mutable without any thought for defensiveness. A malicious coder could cause quite a problem with these variables. A more defensive style would be to use private variables (or Symbols) to hold the values and provide public getter and setter functions to expose them. The setter functions, in particular, can do error checking and ensure types and values match acceptable inputs.
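
Here is a sketch of that more defensive style, showing just the mpg member with a modern private field (the other members would follow the same pattern):

class Car {
    #mpg = 25;

    get mpg() {
        return this.#mpg;
    }

    set mpg(value) {
        // Reject any mutation that would corrupt the class state
        if (!Number.isFinite(value) || value <= 0) {
            throw new TypeError('mpg must be a positive number.');
        }
        this.#mpg = value;
    }
}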

8. Code is a User Interface

Your code, whether a component, a utility, or an API, is going to get used by someone else. This is even more true when one considers how pervasive sharing code online has become. And if you work in a team setting at all, that compounds the likelihood someone else will need your code. I do not care how throwaway you believe your code is, assume that someone is going to use it.

To that end, and as stated before, treat your code like a User Interface. Designing your code to be read is important, but designing your code to be used by others is equally important. Ask yourself how another user is going to come in contact with your code. If they see the function signature, does it provide enough information as to what it does and how it works, or does it require additional information? If they read your code, is it obvious how to step through it from line to line, or does it require more contextual logic?

A lot of what we have already covered in this document describes how to address writing code toward usage by others, but it bears repeating here:

  • Design your code to be read by others.
  • Highlight hierarchy with indentation.
  • Separate discrete sections of behavior with whitespace.
  • Name classes, functions, and variables contextually.
  • Provide comments around programming logic.
  • Clearly separate your application concerns.

Following these steps is the first part of writing code like a user interface.

The second part is to always be asking yourself, how will others use my code? Challenge yourself to always be mindful of this question and your code will be the better for it.

Most professional coding work happens in a shared environment where everyone is always working in each other’s code; any real team experience involves this. As such it is critical to always be considerate of how others are going to use your code. Whenever you write a new component you should be consciously aware that someone else may very well need what you are building for their own purposes. Write your component with that reuse in mind. But also, be insatiable in your desire to make your component as accessible to another developer as possible. The more we reuse one another’s components, the better our product becomes.

9. Documentation

Finally, let us talk about documentation. To many software engineers the act of writing documentation is seen as an afterthought, something to be farmed out to some loser English Literature person desperate for a job. But the reality of software engineering is that it is only 50% about writing code. The remaining part of software engineering is communicating with other people.

As we have stated above: someone else is going to use your code. It is inevitable. Given that, telling those users how to use your code is critical. If you fail to communicate how your code runs, you might as well not have written the code at all.

We talked about comments in a prior section. Comments are used to describe the logic your code follows. Documentation, on the other hand, describes how a user of your code interacts with the code and gets it to do what is expected.

Unlike inline comments that describe logic blocks, documentation is about describing the interfaces of your code, the places where others will interact with it. We write documentation to tell those who are using the code what it does and how it is used. This does not mean you document every single function and variable. Instead, it means you document every public-facing part of your code, every place others will interact with it.

Fortunately, in this day and age, you do not even have to lift your eyes out of your IDE to write documentation. Nearly every modern language has an associated documentation syntax. Java has JavaDoc, TypeScript has TSDoc, JavaScript has JSDoc, etc. These documentation languages assist you in describing your classes, functions, and variables, every aspect of your code, as much as you want.

Even better, most modern IDEs have these documentation languages built right into them and have tooling that makes writing that documentation easier.

Here’s an example of using JSDoc:

/**
 * Move the selection to the "first" section.
 * @return {void}
 */
selectFirst() {
    const first = this.sections.find(section => section.startHere);
    this.select(first || this.sections[0] || null);
},

In this example the comment block that precedes the selectFirst() function describes what it is and what it returns.
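
A function that takes arguments can document those as well; here is a hypothetical sketch using JSDoc’s @param and @returns tags:

/**
 * Count how many times each word occurs in a list of words.
 * @param {string[]} words The words to count.
 * @returns {Object<string, number>} A map of each word to its occurrence count.
 */
function countWords(words) {
    return words.reduce((counts, word) => {
        counts[word] = (counts[word] || 0) + 1;
        return counts;
    }, {});
}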

Documentation languages vary by programming language, so please read up on your particular one.

One more note about documentation that needs addressing: your code is not self-documenting. Many, many people have said this over the years, and every single one of them has been wrong. What they are really saying is that their code is readable and that is enough. In some cases this is true, but mostly it is not. Please add documentation to your code about what it does and how it is used.

Denouement

So that’s it. Above we outlined nine (9) separate ways to make your code more readable, more descriptive, more defensive, and more user-friendly. It’s a lot, to be sure, but each of these approaches is a small step towards the larger goal of writing better code, code that is more than just runnable.

We recognize that this could be seen as a lot. We all know that software is a demanding job, full of demanding bosses and customers who think it is super easy. And these best practices take additional time, they cost cycles, and adopting them can seem overwhelming. However, starting small is the key. When considered as a whole, these rules can seem daunting, but individually they are not so rough. Try adopting one or two of these at first, then a few weeks later add another, until they are all second nature. It’s a good bet you are already doing one or two of these without even thinking about it. So, add a few more, then a few more, and the next thing you know you will be shipping code you are proud to have others read.

permalink: http://www.arei.net/archives/302
THE HUMAN RELATIONSHIP PACKAGE – LEARNING FROM EVENT-STREAM

Authoring and sharing a package is a relatively simple activity these days. Systems like GitHub and npm make it easy for an author to share and publish their work. This is a great thing; it is a wonderful time to be a developer and involved in a developer community.

However, there is a cost to sharing and releasing one’s code that is often unwritten and overlooked. Authors may feel like they are sharing their grand ideas with the world, but they often fail to understand the expectations that go along with publishing a package. Likewise, users think that they can just download a package in an all-you-can-eat buffet of open source software, never realizing that the very success of the package they are downloading requires them to participate in the community of that package. Both sides have responsibilities in the situation and for either side to ignore those responsibilities is a recipe for failure.

Several months ago the JavaScript community dealt with this very problem. A highly respected package author had shared a package (event-stream) he wrote for fun, and it got very popular. As is often the case, popularity came with more requests for support and bug fixes. In short order requests became demands, and the author’s time and interest lagged. All the while, very few users were willing to step up and help. So when someone did step up and help, the author accepted and turned over the repository. Shortly thereafter malicious code was introduced into the package, and everyone who relied on the package became a potential victim.

In many ways event-stream was a kind of “perfect storm” event that may be unlikely to occur again. Yet, the event-stream saga is a very real example of the cost of publishing code for others to use. Some would blame the author for not providing a better notification that he was handing the package off to someone else. Others would blame the users for not supporting the package either financially or through participation. Still others would point out that this is a professional community and as a professional one is required to know and understand how their software works and the systems it relies upon work. Yet, no matter how we throw the blame around, the reality is everyone paid the price for the failure, and everyone (the author, the users, and the greater community) is culpable.

A dependency, a package, is more than software; it is a relationship. Like all relationships, it comes with a set of expectations between the parties involved: party A in a relationship expects certain things of party B, and in turn party B expects certain things of party A. When either side fails to live up to those expectations that is when the relationship degrades and fails and people get frustrated, hurt, and betrayed.

Consider what one expects when downloading a software package:

  • Function: The user of a package expects that when a package is given X, it will return Y. This is the expectation by the user that a package will work and that it will meet their needs. Occasionally a package fails to work or meet the needs of the user, at which point they uninstall it and the relationship is over.
  • Constraint: The user of a package expects that a package is constrained to doing only what it is intended and described to do and nothing more. That is, the user expects a package to not do other things whether malicious or not.
  • Support: The user of a package expects the author of a package to provide some level of support. This includes well written documentation, responsive bug fixes, avenues of communication, and the addition of new features.
  • Notification: The user of a package expects to be informed when the package changes, how it changes, and why those changes were introduced. Notification allows the author to communicate with the users directly.

But what a lot of people overlook is that the other side of the relationship has its own expectations:

  • Feedback: The author of a package expects that the user of a package will report issues with the package. One can never predict all of the edge cases where code may have problems. The author is dependent on the user providing the details about where those edges are and where they fail.
  • Reciprocity: The author of a package expects that the user of a package will help maintain and grow the package. This is a critical part of the open source equation: that everyone shares in the maintenance and growth of the package for the betterment of all. Without this participation most packages languish and die.
  • License: The author of a package expects that the user of the package will use that package in accordance with the licensing terms. The license is an expression of the legal wishes of the author, but the author has little ability to enforce such things. Instead, the author relies upon the relationship with the user to encourage these legal wishes.

When any one of these expectations from either side is broken the relationship is strained. Sometimes that strain can be minor and survivable, but often it can be completely destructive. In the case of the event-stream story from above, the users failed to fulfill the Reciprocity expectation, which caused a snowball effect in which other expectations failed, eventually leading to critical failure of the entire package.

So how does one address these problems? How do the authors and users of a package prevent failures from occurring in the future? How does one ensure that the participants in our communities get what they need to not just survive but thrive as part of the community?

It begins by recognizing that there is still much work to be done on the systems in use and how the community uses those systems. Now, many of the tools at the disposal of package authors are geared toward providing and maintaining some of the expectations outlined above. GitHub, for example, can be said to be helpful in some of these expectations, but lacking (whether intentionally or not) in many of them. Certainly it enables the Support, Feedback, and Reciprocity expectations, but there is still much room for improvement.

While stronger more supportive systems and tools will help address some of the community problems, one must also be willing to examine and address the individual and community aspects as well. In many ways, the current situation in which the community finds itself is a function of getting better tools without considering the social engineering those tools also require. So any solution must also examine the people involved.

The authors need to understand that releasing software is not a “fire and forget” situation and that the users have expectations of the package and its author. There must be a realization that by sharing code with the world, one is creating a community that requires nurturing to grow and thrive. When you publish your code on GitHub it is no accident that you immediately get an Issues tab and a Wiki tab. These are the author’s tools to build a community. And as the author one must be ready to accept being the leader of this community. That means meeting the expectations of that community to the best of your ability: maintaining the software, ensuring it is safe to use, supporting it as needed, and updating your users about changes.

However, nobody is suggesting that sharing code automatically dictates a lifetime of servitude. Authors must have the tools and community support necessary to lead within the confines of their time commitments. Part of nurturing and enabling a community is knowing how to ask for and accept help when it gets overwhelming and how to apprise your community about the situation.

Yet the author is not alone in this package relationship: the users bear an even greater obligation to the relationship. Every time a user downloads a new package and does not find some way to participate, that user is helping to contribute to that package’s failure. It sounds harsh, but one cannot continue to treat open source software as a free ride and expect to suffer zero consequences.

The manner in which a user contributes back to an open source project comes down to two possible actions: time or money. A user can donate time by helping to improve and grow the project: opening detailed issue reports, submitting pull requests which fix bugs or add features, improving documentation, offering support to newer users, helping to lead the project. All of these activities take time from the user and give it to the author to help the project. Time is the most critical thing an author needs to maintain a project. However, if time cannot be found, or is prohibited or prevented by one’s employer, consider donating money, or asking the company to step up and donate money. This can be tricky for some projects, as questions about who gets the money, taxes, etc. come into play. However, if there is an avenue to donate, please use it. If not, consider asking the project to add one. Also, for some projects it is possible to buy advanced licenses or support agreements, and this can be considered a method of getting money into the project.

Ultimately a package is only as strong as both the author and its users make it. It is incumbent upon both sides of the author/user relationship to commit to their expectations. Anything less is setting oneself up for failure. The event-stream incident is a lesson from which current and future packages can learn. The users share as much culpability as the author does, maybe even more. An unwillingness to see that is an indicator of a willingness to fail again. And the big failure of event-stream is not that malicious code got injected; it is that no one will learn from it.

Supporting materials and further reading:

https://medium.com/@cnorthwood/todays-javascript-trash-fire-and-pile-on-f3efcf8ac8c7

https://www.zdnet.com/article/hacker-backdoors-popular-javascript-library-to-steal-bitcoin-funds/

https://gist.github.com/dominictarr/9fd9c1024c94592bc7268d36b8d83b3a

https://changelog.com/podcast/326

permalink: http://www.arei.net/archives/293
My Thoughts on Speaking at JSConf Last Call

Speaking at JSConf Last Call is an amazing experience, and not for the reasons you might think. While the experience is still fresh in my mind I wanted to set down my thoughts and observations. If you are finding this too long, skip to the end for my “lessons learned.”

In case you were not at JSConf Last Call here are the pertinent details… I, Glen Goodwin, and Todd Gandee gave a talk together entitled “We are Hacks and have been Stealing Code for Years.” The talk was about how we all steal code because it is part of the process we use to learn new things, and how it is our responsibility to make sure others in our community learn this process. We were very well received, mostly due to Todd and me styling the talk as a quick back-and-forth dialog, hitting the major points we wanted to hit along the way. The audience enjoyed our presentation style, laughing a lot, but listening when we got serious. It was an awesome audience. The video is coming soon. This was my first ever talk at a technical conference and Todd’s second (but largest) technical talk. We are both super grateful to JSConf for giving two unknowns a huge chance on a relatively unknown subject.

So that’s the back story. Now here is where I tell you about everything else. The rest of this article details how we came up with the idea, our process in creating the talk, what it was like to give the talk, and most importantly what we learned from the process.

HOW IT ALL CAME TO BE

So the story of how our talk came to be begins with a tweet. Todd Gandee, Chris Aquino, Eric Fernberg, and I had all met at JSConf 2013. And every year since then we’ve all tried to make it back to the subsequent JSConf events. Each year when the JSConf announcements are made, a tweet goes out among us asking who is going. For JSConf Last Call this was no different, and I asked everyone who was going to go.

Todd replied that he was thinking about putting a talk together.

To this I replied offering to co-speak with him or help out.

It was a pretty big move for me to offer to co-speak. I had always thought about talking at JSConf but never really felt like I had anything worthwhile to say. (See Kevin Old’s talk about Imposter Syndrome) So my desire to speak was there, but I lacked what I felt was a really big idea. But more than that, the idea of speaking scared me a whole lot; not because I am afraid to speak publicly, I have little fear of public speaking, but more because I was afraid to put myself out there. I remember agonizing over even sending my offer to co-speak or help Todd, but eventually I just decided to risk it.

Todd’s answer came back four hours later and we were on the phone talking within a day. He was in.

The good news was that Todd had a good idea for a topic. The great news was that when he told me it, I got excited. I knew I was excited because my brain kept coming up with new ideas, new angles, new thoughts. When I can’t shut my brain up, that’s a really good sign, and the longer my brain keeps spinning on a subject, the better the idea.

So we opened a Google Doc and started spit-balling ideas.

SUBMITTING THE TALK

There were going to be some barriers to working together on this talk, primarily distance. Todd lives in Atlanta and I live in Baltimore, about 700 miles apart. Yet we knew the internet was full of tools for collaborating and we could employ them to our advantage. Initially we started with a Google Doc into which we put our ideas. Then came Google Slides for actually building our slide deck. Eventually we turned to Google Hangouts and ScreenHero for rehearsing together, but that’s getting a bit ahead in the story.

I want to tell you that Todd and I got together every couple of days and constantly refined our idea, worked out the story we wanted to tell, built the slides, etc. I want to tell you that but it would be a complete lie. Life is hard and gets in the way a lot. We are no different, so there was a fair number of stops and starts and really long breaks.

Initially we started by coming up with the JSConf submission form answers we were going to need. After all, there was no real point in working on a talk if we didn’t get accepted. So we crafted a Mission Statement. Well, really more of a presentation abstract, but I really wanted to fit that Jerry Maguire link in there. Then we massaged the abstract into the JSConf submission form.

We had a lot of questions though… Would JSConf allow a pair speaker presentation? Could they even technically support two speakers? Was what we were proposing a good topic? We emailed our questions to various JSConf people we knew from past years. I reached out to Chris Williams, having met him a number of times in mutual local community events. Todd reached out to Derek Lindahl whom he was friendly with from prior JSConf events. We wanted to know if we even had a shot and we wanted to be clear that we were willing to work with JSConf. We didn’t want the fact that we were going to have two speakers put any additional financial burden on JSConf. We were happy to pay our own way, so JSConf didn’t need to comp us extra because of a second person.

Our contact with the JSConf staff met with mixed results. I knew Chris was really busy with real life things and was not surprised when he never got back to me. Todd had a little better success with Derek, but it fundamentally came down to Derek saying, “Just submit it. If it’s good it will get selected.” That quote from Derek, by the way, is the answer to tell yourself if you are ever thinking about submitting. Just stop second-guessing yourself and go submit it already.

So, on the last day of the submission deadline, after going back and forth a few times on our answers, we submitted our talk idea.

Here’s our initial submission abstract:

Two intrepid developers, who met at JSConf, examine the relationship of sharing code, community, and developer growth throughout the short history of making programming more art than engineering.

In this talk, Glen Goodwin and Todd Gandee will walk back from the present day to the “ancient” past of Babbage and Lovelace discussing how the act of “creative borrowing” influences learning and understanding for computer programmers; how we learn by observing and deconstructing the work of others to make it our own. This includes an examination of past and current models used for “stealing” the (mostly) freely shared knowledge and past work of others like Github, StackOverflow, View Source, and Byte Magazine. Our talk emphasizes the importance of inclusive conferences like JSConf in the growth of junior and senior software engineers. Programmers’ tools of today illustrate the apprentice/mentor relationship more akin to the arts than engineering.

On October 20, 2015, we were officially notified of our acceptance. It was a glorious moment, getting accepted. Rachel White in her own JSConf Last Call talk said she ran around the building screaming upon being accepted… There may or may not have been some happy dance moves on my part; I admit nothing.

THE FIRST DRAFT

And then it sunk in… Now we have to write the damned thing.

Initially we started just coming up with ideas. We had the basic theme of our talk in the form of our title “We’re Hacks and We’ve been Stealing Code for Years.” Great title, but what did it really mean, what does “Stealing Code for Years” really imply? Also, we knew that this being JSConf’s swan song meant something special to us and that we really wanted to show that. In the shared Google Doc we just started throwing out ideas, snippets really, that we thought might be relevant, possibilities.

And then neither of us touched it for weeks. Like I said, life is hard and things get in the way.

Yet, the thing about an exciting idea, for me, is that my brain never really lets it go away. So while neither Todd nor I talked about it or added anything new to the Google Doc, my brain was constantly spinning things around, you know, like in a background thread.

Then one day, while sitting in my favorite lunch spot, having my favorite lunch (beer), I had a moment, a vision, an inspiration. “We should totally open the talk wearing ski masks, like we’re trying to protect our identity.” So I pulled out my iPad and proceeded to write this down. I knew that in order to pull off a ski-mask-based gag, the dialog would need to be very quick, so I decided to approach it like a theatrical scene using a script. And once I started writing it, once I started working in the script format, the words just poured out of me. It helps that I was an English Literature major in college and writing comes very, very easily to me.

So, yes, the first ten minutes of the first draft of the script was written in a bar, on an iPad, while consuming beer.

Explains a lot.

THE SECOND DRAFT

After some initial conversation with Todd about this new script, we again stopped working on the project for a few more weeks. While the first ten minutes of the script was done, the rest wasn’t really coming to me. And what little I did add after the initial burst was a little disjointed. We had all these ideas, but we lacked an organization for the ideas.

And then inspiration struck Todd.

Todd is a very visual thinker where I am a very textual thinker. He thinks by drawing stuff out, where I’m more of a writing-stuff-down kind of person. They are two different approaches, complementary at times, but incongruent at others. So Todd was having trouble thinking about the talk because we hadn’t organized it visually; I was having trouble thinking about the talk because I couldn’t figure out where to go next. And we were both stuck.

And like I just said above, inspiration struck Todd. He called me up. “I made a Trello board to just write down all the slides we need. I need to organize how this is going to go.” So we fired up Trello and we started to work. We first outlined the major points we wanted to hit, like talking about the history of Code Stealing or the section on Community. Those became our Trello columns (Trello calls them Lists). Then in each column, we put the specific points we wanted to make, like talking about NodeSchool or Women Who Code. Then we could move the Trello columns around to come up with the best way to present the story we wanted to tell, the progression from Stealing Code to Community.

I cannot stress enough how much what we did in Trello saved our talk. It was so instrumental in organizing what we wanted to do. And once we had that, the script I had to write became super easy to do. I literally copied all the column names out of Trello into the script as section headers. Then I copied the specific points of each section into the script. A little bit of refactoring on the introduction part, and the script pretty much seemed to write itself.

The Script was completed three days after we did our work in Trello. Well, the second draft was completed. The final draft of the script wouldn’t be done until about two days before our talk was to be given. It probably would have been tweaked right up to minutes before the talk, but we had to pull the slides out of Google Slides and into Keynote to protect against conference internet latency. That meant Todd had the latest copy so I was prevented from rewriting anymore.

TODD MAKES SLIDES

The plan was for me to write the script and Todd to work on the slides. But in order to make the slides, Todd needed a sense of where the slides would go, how they would fit into the script. So we decided that as I wrote the script I would put slide changes in as scene notes, like this:

GLEN: That’s my point… We have an entire industry of tinkers, it’s baked into what we do. [SLIDE: Tinkering, or breadboard in state of repair]

Also, since the script had a certain flow I had an idea of what slides I thought we should use. Plus I knew there were just some slides I had to have in the talk because I love the images, like this IT Crowd one… Mandatory for any talk in my opinion.

So once I really got started on the script, Todd got started building the slide deck out. He started collecting images and putting them in order. I am firmly in the camp that you should never make your audience read your slides, so we agreed to minimize that: use the images to enhance what we are speaking about, so the content of the talk is the focus, not the images. Turns out that when you throw up a bunch of humorous slides and make people laugh, that also kind of pulls their attention away from the content, but we’ll talk about that later.

Of course, while all this slide work was being done the script was constantly being tweaked, so the slides were getting tweaked as well. I was making sure to re-read the script 3 or 4 times a day, fixing typos and refining the flow. Todd was continually adding more slides and I was continually offering more suggestions.

The entire process was very iterative. I’m not sure how much Todd started to hate me at this point because I kept changing the ground out from under him. I tried to minimize the impact of my changes, but it happens. There were also a lot of places where I would change his slides and he would change my script. We had to completely throw out the idea of ownership: even though I wrote the script and he did the slides, neither of us owned our respective parts anymore; everything was shared.

Meanwhile, we both set out to memorize the script. Big mistake.

REHEARSING OVER THE INTERNET

See, the script I wrote was roughly twenty (20) pages of mostly rapid back-and-forth dialog. Neither Todd nor I have ever done any acting at all; we had zero experience memorizing lines. And let me tell you, memorizing lines is incredibly hard. I am a huge live theatre fan, especially Shakespeare, and have always had a lot of respect for actors. In the weeks leading up to the conference my respect doubled, tripled, then doubled again. My hat goes off to the actors who can memorize their roles in iambic pentameter.

So we came to the realization that we were not going to be able to memorize our parts. Instead, we were going to have to read from the script as we walked through the slides. So we had to cut and paste each section of text from the script into the slides. Also, because of the rapid pace of our dialog we would need at least a few lines from the next slide showing on the current slide. This, it turned out, was a fair amount of work; and as we kept refining the slides we also had to refine the notes for each slide, the text from the prior slide, and the text from the next slide. It was a constant battle of keeping everything aligned.

About five days before the conference we had our first “live” reading together using ScreenHero and Google Hangouts. We set some time aside on Tuesday night at 9:30 after our respective wives/partners had gone off to bed. I remember telling my partner Jennifer that I would be up “in about an hour.” I went to bed at 1:30am that night. We managed to read the script completely through exactly twice.

This is where I feel the real work began. We moved some slides around, changed some images, changed a bunch of dialog placement, and all that. Every single slide and line of dialog was tweaked and reviewed. It was constantly a work in progress and as I said above, changing slides around meant a lot of rework to get all the alignments correct. Uggh.

I think prior to our first reading I figured we’d rehearse a couple of times, do the Google Hangout thing, then maybe do a few reads the day before our talk. Except it became pretty clear right from the first reading that the timing of our dialog was going to be everything. The opening bit with the masks was especially hard for me to get down because of the constant-interruption nature of those first 20 lines.

After working into the wee hours on Tuesday, we decided we needed to do it again the next night.

Wednesday night at 9:30 I told my partner Jennifer once again, “This should be much shorter, we just need to read it.” I went to bed Wednesday night at 2:30am.

THE DAY BEFORE THE DAY BEFORE

On Thursday I flew down to Jacksonville. It’s a two hour flight from Baltimore, plus a few hours sitting around in the airport. I re-read the slides a dozen times. I had one particularly long speech that I just couldn’t seem to get down, so I kept going over it again and again. I’m pretty sure the couple sitting next to me on the plane thought I was some sort of crazed lunatic because I just kept staring at my iPad and mumbling to myself under my breath; and then every time the “Points for Glen” slide would come up I would throw my arms up in the air. I’m pretty sure there wasn’t a sky marshal on the flight or I would have been detained.

The preceding Monday Jan Lehnardt had contacted me about sharing a shuttle van from the airport, since he, I, and Patricia Garcia were all arriving at the airport at the same time. I had already booked a rental car, so I just invited them to join me in the drive. Best thing I ever did. Jan is a seasoned pro at talks and Patricia had just given her first one at JSConf EU. I quizzed them both mercilessly for tips and tricks and perspectives. They were both very open and sharing, despite having been awake far more hours than they should have been and the fact that it was like 2am in their local time. I appreciate their company and helpfulness.

Additionally, Jan pointed me at a blog post he wrote on the subject of speaking at conferences, which he emailed me later. It’s a great quick read and you should totally read it. It also gave me the idea to write this up and share my own experiences as well. So much thanks to Jan and Patricia.

THE DAY BEFORE

Todd (and another friend of ours Chris) arrived the next morning. Let the dress rehearsal begin.

Remember when I said I had constantly been tinkering… It’s true. The first thing I did before we even rehearsed was drop a slide and 2 lines of dialog. We also realized (remembered actually) how shitty the hotel WiFi is. This is important because we had used Google Slides to write our presentation. The first time we ran through the Google Slides on site we realized waiting for each slide image to load wasn’t going to cut it. We ended up exporting the slides to Keynote. For the most part this was fine as most everything moved over except for one crucial part: We had done some color coding of our speaker notes to indicate text said on a prior slide, or text coming up on the next slide, or even whose line was whose. That didn’t transfer over. Todd was a trooper and reformatted all those things Friday night instead of sleeping.

We rehearsed probably eight times that day. Chris (our other friend) came in and pretended to be our audience despite the fact he had heard our talk five times already at this point. Him being there was so critical because it acclimated me to the idea of an audience and let me focus on them instead of always looking down at the slide notes.

A large part of our practice on Friday was just getting comfortable with each other and learning the timing and the delivery of each line. We had never given a talk together, so just learning each other’s cues was really important. Creating a rapport between us was really one of the biggest successes we had in our talk, and that rapport was entirely due to spending a day practicing. By learning each other’s cues we also learned how to respond to each other conversationally. This turned out to be critical because I don’t think we ever said the same line the same way twice. There was a fair amount of “scripted improv”. That is to say, while delivering a line, each of us would modify the wording or the intonation or the pace as we actually said it. It made for a much more comfortable dialog.

Practice also proved out that our method of reading the slide notes would work amazingly well without the need for total memorization. Yeah, I did end up memorizing a lot of my lines, but what I had a big problem with was knowing when each line was supposed to come. However, because we had a dialog between the two of us, whenever Todd was speaking I could glance at the script and read my next line.

Another thing that we changed during practice was who was running the slides. Initially I ran the slides during our remote practices, but it seemed to get in the way when we were rehearsing together. We tried splitting who was in charge of what slide, but it didn’t work. So Todd just took it over and did a far better job than I had. I think he’s more used to using the clicker and was less worried about timing the slides with the dialog. I also think he really wanted to take over running from the very beginning, but I was unable to hear his desire. Sorry Todd. You did an awesome job running the slides.

GIVING THE SPEECH

Honestly, I have very little recollection of what happened during the speech.

I couldn’t tell you how Tracy introduced us, but I’m sure she did a great job. Everyone knows she’s awesome.

I know a lot of people laughed, which was so amazing. I knew we had a few jokes in there, but never expected our audience to laugh as much as they did. And the laughter started the minute we stepped on stage. I’m told we looked utterly ridiculous. I remember telling myself prior to the talk that “if we got some laughs, just wait a second or two for them to die down before continuing.” Problem was, in the intro, the laughter didn’t stop. People genuinely thought what we were doing was funny. That made the speech for me right there.

After that point, everything was coasting.

I know I made two mistakes, but they weren’t huge and I was okay with that. I remember toward the end of the talk, Todd read one of the lines I was supposed to read. So I read his. And we kept going reading the other person’s line. Nobody seemed to notice, so we went with it. I think the practice really allowed us to do that. Well, that and it was pretty close to the end.

I do remember Zahra‘s game of twenty questions, but having missed all the prior talks due to rehearsals, I had no idea why she was asking us these questions. It seemed odd, but I rolled with it. Turned out to be a lot of fun… I improv fairly well. I think Todd doesn’t do as well and was less comfortable.

AFTERSHOCKS

Let me tell you what it’s like after giving a speech…

First, everything about your talk, all that memorization that you did? All of it, gone. My brain literally emptied of everything regarding the talk within a second of exiting the stage. Well, that’s not entirely true… I know all the themes and points we hit during the talk, but the lines we memorized? Can’t think of a single one. A few have surfaced back to memory over the days since, but by and large all the memorized text is gone.

Second, I was full of energy. My body was positively charged and I just wanted to bounce around. We had given the talk, everyone laughed, it was great. I was humming.

People came up to us and said good things. Chris Williams came over and complimented the hell out of the talk. That meant so much to us that he enjoyed it. Todd and I had said that if just one person came up to us afterward and complimented the talk, then it was a success. That the first person to do so was Chris made it all that much better.

We shook hands and said “Thank You” a lot. We met new people, we grew our community, it was amazing.

And then the crash… I think your body is running so high building up to your talk that when it’s all over the adrenaline goes away and all you want to do is sleep. I almost fell asleep in one of the talks after ours. Todd was in the exact same state. Power nap time. It’s amazing how much your body can recover in a 30 minute nap. Try it some time. 30 minutes is the perfect nap length.

WHAT DID I LEARN?

Don’t agonize over submitting your talk, just do it. You are completely capable of giving a talk and people genuinely want to hear what you have to say. Stop thinking about it and go submit a talk. Do it now, I’ll wait.

Don’t give a talk with another person unless it’s absolutely critical to do so. It’s really, really hard to do. We were very lucky in that Todd and I work amazingly well together. YMMV.

Outline before you write, or make slides, or whatever. The outline is the key to organizing your thoughts.

You don’t need a script like we had, but have solid notes that are easy to refer back to.

Don’t make your audience read your slides.  Use images to enhance what you are saying.  But be careful with the funny image, as it can distract people from what you are saying as well.

If your slides depend on timing, make sure your notes have the text that is immediately following your slide change so you don’t lose your place.

You can never practice too much. I think overall, Todd and I rehearsed together somewhere between fifteen and twenty times. When we were not rehearsing I was reading the slides over and over again.

Delivery is everything. The timing, the intonation, the mannerisms, they all play into the performance.

It’s a performance. I learned this from Jan’s blog post (see above) and it’s absolutely true. Even more so for us where we actually had a script.

If you do slides in Google Slides, export them to PowerPoint or Keynote for the actual presentation. Never ever rely on conference WiFi.

You are absolutely going to make mistakes during the talk; just roll with it. Take a second, move on. Repeat your line if you have to. Like I mentioned above, Todd said one of my lines near the end of our talk, so I said his next line, and for the last 10 or so lines of the talk we were inverted. We never rehearsed that, so we were completely unprepared for it, but we were pros at reading the script by then, so nobody noticed.

As Chris Williams told all the speakers in a conference call prior to the conference, everyone in the audience wants you to succeed. They’re your peers, co-workers, friends, and family.

Thanks.

permalink: http://www.arei.net/archives/289
Announcing npmbox

So, I have a problem with npm in my current work. Basically it’s this: the systems I work on are disconnected from the internet, but I still need certain node modules. So in order to get a module over to these systems I have to install it locally, zip it up, and move that file. But technically, that file is an INSTALLED version, not the raw version of the install. So what I need is the raw files of the npm install, and then a way to install from those files.

I recently filed a feature request for npm describing this very request. Yet, I don’t think everyone understood the request, plus I thought it wouldn’t be that hard to actually build it. So I did.

Announcing npmbox, a command line utility add-on for npm. When given an npm package

npmbox <package>

it downloads the package and its dependencies and creates a .npmbox file, which is a .tar.gz of all the raw files necessary to install the package: the package itself, its dependencies, and its optional dependencies.

Taking this .npmbox file, you can then move it to your offline system and issue an npmunbox command against it.

npmunbox <npmbox-file>

This will take the box file, unzip it, and install its contents into the current folder.

Thus this achieves exactly the process I was looking for: bundling a package for offline use and then installing from that bundled package.
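
For example, the full round trip might look like this (I am assuming here that the generated archive is named after the package; check the README for the exact naming):

npmbox express
# copy express.npmbox to the offline system, then on that system:
npmunbox express.npmbox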

My sincere hope is that the npm people take a look at the functionality and actually add it directly to npm. I’m not expecting they use the code, just the basic idea. I’m sure there are far better ways to achieve this from inside npm than what I am doing. So have at it.

permalink: http://www.arei.net/archives/274
Hiring Cleared JavaScript Developer, Ellicott City, MD

Generally I avoid recruiting people on behalf of my company, but they have recently asked me to hire a person to work directly for me and hopefully even replace me in the future.  And, quite frankly, I’m tired of interviewing people brought to me by the company recruiter who have no understanding of what is really involved in writing code.  So, I wrote this much more interesting sounding job description and now I’m looking to find someone to fill it.  Are you that person? Know someone who is that person? Let me know with a quick email sent to vstijob/at/arei/dot/net.

Are you an up and coming hot shot Web Developer who somehow ended up working on government contracts?  Are you horrified by the state of technology that the government contract you are working on has saddled you with using? Do you wish that your project would stop supporting browsers that are 11 years old and move into the future? Do you want to move beyond writing JSPs, Web Services, and XML? Are you ready to give into the dark gibbering madness that comes with embracing technologies like JavaScript, CSS3, JSON, and NodeJS? Are you ready to awesome-ify your career?

VSTI, A SAS Company, is looking to hire a Junior to Mid-Level Web Developer/Engineer and we want it to be you. Well, we want it to be you if…

  • You have a good basic understanding of JavaScript and are ready to learn a lot more.  And by ‘basic understanding’ we mean that you know what a closure is and how to write one in JavaScript, but you really want to understand why this makes JavaScript so powerful and all the really cool things you can do now that you know what a closure is.  We’re not looking for a JavaScript expert, but someone who wants to become a JavaScript expert.
  • You know more about HTML/CSS than just how to write tags and believe that 90% of HTML should be written using only DIV tags and some amazing CSS. In fact, you should be able to tell the difference between block and inline elements at a glance.
  • You find the possibility of using cutting edge technologies like NodeJS and ElasticSearch to be darkly enticing. In fact, just the act of us mentioning that you could work with those technologies makes you giggle like a mad scientist.  After the giggling has died down you decide to go read the documentation just for fun.
  • You have a serious passion for technology and want to learn more, a lot more.  Ideally, you wish you could learn everything through some sort of cybernetic implant process, but you realize that you still haven’t invented that yet and the only way to learn is to read and to get first-hand experience.

Here’s the obligatory job posting details…

  • You must be willing to work in Ellicott City, MD on a daily basis.  No telework is possible.
  • You must have an up to date TS/SCI Full Scope Polygraph clearance.
  • You must have a solid understanding of JavaScript, CSS, and HTML.
  • You must be willing to be figuratively thrown into the fire.
  • You must be passionate about learning new stuff.
  • You must desire to become an expert in Web Technologies.
  • You might also have an understanding of any of the following… Java, Groovy, Ant, Grails, more than one JavaScript framework (like jQuery or Prototype), Agile/Scrum/Kanban, CSS Resets, SVN, Hudson, CSS3, Rally, Apache Tomcat, or Git.
  • If you have a Github account, a blog, or an active twitter account that will help your cause greatly.
  • You might also be competent with some art software like Adobe Fireworks or GIMP, but it’s not a requirement.

In return for all of this VSTI, A SAS Company, will provide you with the following…

  • A very competitive salary.
  • Amazing benefits that you are not likely to get elsewhere.
  • Small company feel, Large company resources.
  • An amazing chance to learn and grow your skills.
  • Patriotic pride in what you do (since this is a government contract after all).
  • The opportunity to be part of a team where your opinion is valued and taken into consideration.
  • The possibility of free beer if you like that sort of thing.

So, if any of this sounds cool to you then you should apply for a job.  We’re interviewing now and we’re looking to hire very quickly.

permalink: http://www.arei.net/archives/260
node-untappd v0.2.0

Just a quick note to say that last night I released v0.2.0 of node-untappd, the Node.js connector to the Untappd API, to Github and NPM.  This new version supports the latest v4 of the Untappd API, including OAuth authentication.

Lots more details at Github: https://github.com/arei/node-untappd

permalink: http://www.arei.net/archives/258
Introducing functionary.js

I am pleased to release another open source project, functionary.js.

functionary.js is a simple JS polyfill and enhancement for adding additional behavior to functions.  Fundamentally, it allows developers to modify and combine functions into new functions to develop novel patterns.  It is very similar to sugar.js, prototype.js, and the jQuery Deferred object and behavior.  It is sort of like Promise/A but not really like it at all.

Functionary is a prototypal modification to the Function object.  This may cause some users considerable consternation, as there is some belief that JavaScript should not be used in such a manner.  I, however, believe that JavaScript was actually designed to do this very thing and should do it, albeit with great care.  That said, functionary.js will never overwrite a function of the same name on the Function prototype.  So if Function.prototype.defer is already created by another library (like Prototype JS), functionary.js will not overwrite it, but use it instead.
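
The guard is the classic polyfill pattern. A minimal sketch of the idea, using defer as the example (this is my illustration of the pattern, not functionary.js’s actual source):

// Only install defer if another library has not already defined it.
if (!Function.prototype.defer) {
    Function.prototype.defer = function() {
        var fn = this;
        // Run the bound function once the current execution context finishes.
        setTimeout(function() { fn(); }, 0);
        return fn;
    };
}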

functionary.js works in both browsers and node.js implementations.  However, functionary.js does modify the global Function prototype and should be used with care and full understanding.

It breaks down into two distinct classes: Threading enhancements and combination enhancements.

For purposes of the following discussion, the following things are true:

var f = function() {};
var g = function() {};
var h = f.something(g);

In the above example, f is the bound function of something and g is the passed in function.  Understanding this will help the discussion below be more clear.

Threading Enhancements

While javascript is inherently single-threaded, it is possible to create thread-like behavior using the single thread model and eventing.  This is not a replacement for true multi-threading, but it is an okay approximation and the fundamental underpinning of javascript and node.js.  functionary.js provides a set of function modifiers for working within the confines of this “threading” environment to produce cleaner and expanded function behaviors with regard to threading and events.

The following functions fall into this group: defer, delay, collapse, and wait. defer and delay will be familiar to many JS developers, but collapse and wait are the really interesting additions.

collapse will collapse multiple executions of a function (call it f) into a single execution. So regardless of the number of times you call f(), the actual execution will only happen once… the executions are collapsed into a single execution.  collapse can be used in one of two forms: Managed or Timed.  In Managed mode, the execution is not actually performed until f.now() is manually called.  Thus repeated calls to f() are short circuited until f.now() is called.  In Timed mode, the execution is automatically performed x milliseconds after the first call to f() and then everything is reset.  This ensures that execution happens within some fixed period of time.
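
A rough sketch of the two modes in use; the exact call signatures here are my assumption from the description above, so defer to the github documentation:

require("functionary");  // assumed require name

// Managed mode (assumed form): collapse with no arguments, trigger with now().
var save = function() { console.log("saved!"); }.collapse();
save(); save(); save();   // collapsed; nothing has executed yet
save.now();               // executes exactly once: "saved!"

// Timed mode (assumed form): a millisecond argument triggers auto-execution.
var log = function() { console.log("logged!"); }.collapse(250);
log(); log(); log();      // one "logged!" fires roughly 250ms after the first call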

wait is used to defer execution of the bound function until one or more other functions, passed as parameters to wait, have executed.  Once all those functions have executed, regardless of their results, the bound function is executed.  For example, f.wait(g,h,i) allows a developer to defer execution of f() until g, h, and i have all executed.
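
And a similarly hedged sketch of wait, going off the f.wait(g,h,i) example above. I have not verified whether you then call the original functions or wrappers that wait might return, so treat this as illustrative of the described behavior only:

require("functionary");  // assumed require name

var g = function() { console.log("g done"); };
var h = function() { console.log("h done"); };
var f = function() { console.log("f fires after g and h"); };

f.wait(g, h);  // defer f until g and h have both executed (assumed semantics)
g();
h();           // per the description above, f should now execute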

Combination Enhancements

The other aspect of functionary.js is to provide some simple ways to combine functions with other functions in a clean and concise manner.  Sure, it is possible in unmodified javascript to combine functions, but functionary gives you a clearer way to do this.

At the core of this group of functions is bundle which takes the bound function and combines it with the passed in function(s) to produce one single function.  Think of it as taking function f and function g and bundling them together into function h.  Calls to function h then execute f and g in that order.  While in some cases it’s easy enough to just call f and g directly, in others being able to pass a single function like h can be incredibly useful.  Consider this:

var f = function() {};
var g = function() {};
var h = f.bundle(g);  // h() now executes f and then g
h.defer();            // run the bundled pair after the current execution finishes

The last line of this code ensures that f and g run together, but only after being deferred.

Other functions include chain, before, and after.  These functions are all, in essence, some form of bundle above, although with different orders of execution.

functionary.js provides two new functional wrappers, completed and failed.  completed ensures that the passed in function will only execute if the bound function returned without an exception.  failed is the converse and ensures that the passed in function will only execute if the bound function DID throw an exception.  These two functions offer a unique approach to try/catch paradigms.  Coupled with bundle, a try/catch/finally model is easily possible, as shown:

var compl = function() {};  // the "try" success path: runs only if f does not throw
var fail = function() {};   // the "catch": runs only if f throws
var fin = function() {};    // the "finally": always runs
var f = function() {};
var h = f.completed(compl).failed(fail).bundle(fin);

Finally, functionary.js offers a wrap function, akin to sugar.js and prototype.js, where one can wrap the passed function around the bound function.

Documentation

Please see the documentation on github for all the details of using these functions.  Also, play with the code, it probably won’t bite.

Installation and Details

You can get functionary from github.  I encourage you to check it out, fork it, and submit changes.  The ultimate goal is to make functionary the go-to polyfill/add-on for functions in JS.

Thoughts, Comments, Suggestions?

If you have thoughts, comments or suggestions, please put them on github or contact me on twitter at @areinet

permalink: http://www.arei.net/archives/254
Adding Last Beer and Last Tweet to a Website

So, I updated the site today adding two new features that I have wanted to add for a long time: Last Beer and Last Tweet.

Last Beer is a peek into the last beer that I consumed and checked into on Untappd.  Don’t know about Untappd? It’s basically foursquare for beer and if you aren’t using it, you are really missing out.  I’m a big fan and regularly use it to track my beer consumption and history.

Last Tweet is, as you might imagine, the last thing I tweeted, retweeted, or replied to on twitter.  It’s about as basic as it comes.  I am a huge twitter fan/user and if you are not, you should be.  Twitter is where it’s at and Facebook is incredibly lame.

The really cool thing about these two features is that they are both built and running on node.js.  What’s more, Last Beer uses the node module node-untappd which I wrote back in March as my very first node module.

I will say that getting node.js up and running in my hosting environment was a big PITA, but only because my hosting environment is shared hosting and running server processes is a big no-no there.  I ended up having to buy a VM and run it there.  That meant a fair bit of research and whatnot on which service was better and cheaper and all that, but I finally settled on one in April.  Once the server was provisioned, getting node.js running was trivial.  It’s really been a pleasure to work with once you figure out what you want.  Perhaps I will blog about the experience in more detail at a later time.

http://untappd.com/user/arei

permalink: http://www.arei.net/archives/247
Why Dependency Management Pisses Me Off

Yes, it’s true. Dependency Management Pisses Me Off. Jason van Zyl over at Sonatype needs to be kicked in the groin… repeatedly. (Sorry, I don’t really know Jason and it’s not nice to say such things, but I wanted to really hammer home the point. Jason, I apologize to your testicles.) Seriously, when did we become so amazingly lazy that saving a JAR file into our SVN repositories became a big deal?

Now, I don’t want you to just think I’m some raving lunatic out there on his soap box shouting into the wind despite the accuracy of this picture; I want to at least pretend that you are not going to scream “TL;DR” and actually read the damn posting, so here is why Dependency Management pisses me off. Feel free to reply back to me on Twitter (@areinet) and I will engage you in some spirited debate. And I promise I won’t hurt your testicles in the process.

1). Dependency Management adds unnecessary complexity. Are we not just talking about saving some files into our SVN repository after all? Why is that so hard? And who on earth thinks that writing and changing a pom.xml file is actually easier than this? Also, there are people who would say that you shouldn’t be committing built objects into SVN (or whatever) and that we shouldn’t waste disk space. To these people I say this: disk space is cheap. Seriously, the going rate for an HD is around 9gb per $1 USD. I assure you, no matter how many JAR files your project needs, this is incredibly cheap.

2). Dependency Management puts someone else in your build loop. When I build a project, I want to rely on as few people as possible to fuck things up. Yet Dependency Management injects a completely incalculable third party into your system, and that’s just for one dependency. Sure, that dependency is always there, but with external dependencies, you are practically begging for your build to break because John Bozo three countries away from you removed six bytes of code from that one project you were relying on. Now, of course, you shouldn’t be using LATEST in your dependencies, but I really don’t want to rely on the fact that our build people are smart enough to realize this. If I just committed the version I wanted to the repository, none of these problems would happen.

3). Licenses Change. When your build person goes out there and changes a version number, do you think they actually read a license file? Let’s assume for the sake of reality that people are incredibly lazy… now, do you want to take the risk that so and so actually read the license? And that the license didn’t change? Seriously, it would take me about six seconds to change the license on some sub-project you use and then commit it. And suddenly the sub-project owns all of your IP simply because you used them. Now, the legality of that is a debate for other scholars than I, but it could certainly cause a mess. The only thing stopping a sub-project from doing this is the hassle of suing your ass into oblivion… And we all know that people are getting more and more litigious every day.

4). The Internet is, by its nature, unreliable. Do you really want to rely on the fact that the internet providers upstream from you are not going to screw something up right when your absolutely-must-deliver build has to get run? Seriously, the more people that have input into a process, the more likely that process is to get derailed. I do not want to think that my ability to deliver is dependent on whether or not Anonymous is going to cause a world-wide outage in protest over SOPA (which sucks by the way). Sure, you can run a mirror of x repository and spend your time maintaining that as well, but wouldn’t you rather spend time, oh I don’t know, outside? With a girl? Playing WOW? Doing anything else?

So the point here is this and this is the TL;DR for you lazy people as well… Dependency management adds both complexity and unpredictability to your systems and this is not a good thing. A Build process is about Rigor, and Dependency Management is antithetical to rigor. By using a Dependency Management solution you are willingly signing up for problems and extra work. Who wants that? When given the choice between that and just storing the files I need into my repository, I will choose the latter every time.

Now, I do think some dependency systems are way better than others. The node.js NPM system is amazingly clean, but it’s still begging for the problems I outline above. So, maybe not that awesome. It is easy to use though; I wish Maven were half that easy.

So, that’s it. That’s why every time my coworker comes in and raves about how awesome Maven is I just point at his crotch and start laughing. I mean, really, Maven? Awesome? You got to be an idiot to think that. (My apologies to my coworkers.)

permalink: http://www.arei.net/archives/246
My Ideal Job

Lately, I’ve been asking myself if my current work role is really the best use of my talents.  But shortly into wondering about the answer to this question, I had formed an even more important question: what exactly do I consider the best use of my talents?  So, here, for better or worse, is where I think the best use of my talents lies…

First and foremost, I’m a hacker type through and through.  A work day in which I am not writing code is a terrible day for me.  A work day in which I write only a little code is a terrible day for me.  A workday where I am heads down, balls to the wall, buried in code and completely oblivious to the passage of time?  Ding!  Awesome day.  What’s even better about those kinds of days is that when I’m in the zone like that (Interface Designers call it “Flow”) I am disgustingly prolific. I mean oodles and oodles of code being churned out.  That’s a win for not just me, but the company for which I am working.

Next, I am a very creative person.  This means I get strength and energy from creative outlets.  I am not going to be your go to guy to write that piece of software for which you have painstakingly provided pages and pages of detailed specification.  No, I’m the kind of guy you come up to and say, “Hey, I had a friggin awesome idea, can you whip something together for me to test the idea out?”  I will take your crazy ass idea and run with it.  Now, the result here can be mockups or it can be straight to code, I’m comfortable either way, although I think I’m more productive in code, but whatever.  Just give me an idea that I can contribute to and put my own spin on and I will exceed your most wild expectations.

Also, I love writing reusable components and libraries.  Recently Jacob Thornton, an engineer at Twitter (@fat), shared a tweet at JSConf 2012 that I really found interesting.  He tweeted, “new interview question: you have 45 minutes to write JQuery from scratch. get as far as you can. start from wherever you’d like.”  I absolutely loved this question, not just because I think it would really separate the wheat from the chaff as it were, but also because I would love that challenge.  Ultimately @fat concluded that anyone trying to answer that question would be screwed because there’s so much depth to JQuery, but I would absolutely love to try.  I could spend a lifetime reanswering this question over and over and over, getting it more and more perfect each time.  I have been fortunate in my career that the work I am asked to do more often than not has limitations that prevent us from using certain libraries.  I get to go in, study those libraries, and then recode them for my project.  It’s quite enjoyable and amazingly enriching.

Finally, I love sharing my knowledge with others and learning their ideas and knowledge as well.  Mentoring to me can be quite a lot of fun and there is nothing better, IMHO, than a willing and eager student/peer who wants to learn or wants to debate.  I love that sort of thing.  The caveat, though, is that these activities must not take away from the above two activities.  I like mentoring some of the time, but when it becomes a full time job, when I start managing, that’s where I lose interest.  Surprisingly, I am extremely good at managing and have had a number of management positions in the past, but ultimately, I have no interest in doing it in the future.

So, all that said, my ideal job is Hacker and Evangelist of Prototype Libraries. Now, I know that’s probably not a real job (if you think differently, please send me an email!), but that’s what I would ideally like to be doing. And it’s pretty cool that I’ve figured it out.  I now have a benchmark by which I can hold up two jobs and ask, “How closely does each of these jobs approach the ideal I have set for myself?”  That’s the job I’m more likely to take; that’s the job I want.

So, current job, how much do you think you live up to my ideal?

permalink: http://www.arei.net/archives/245
node-untappd: A NodeJS Library for using the Untappd API

Last night, in anticipation of the upcoming JSConf, I released version 0.1.0 of node-untappd.  node-untappd is a NodeJS API client for using Untappd services. My hope is that someone out there might use this to make some really cool things that they will then share with me and offer to buy me beer for my hard work.  We’ll see how that works out.

If you are unfamiliar with Untappd and drink beer at all, you need to make yourself familiar.  Untappd is a social tracking application for beer.  Think of it as FourSquare for beer.  Users check in, rate, comment, track, and share beer consumption and notes.  It’s an awesome little tool for keeping track of exactly what you are drinking, where you are drinking it, and what you thought of it.

node-untappd connects into the Untappd application by exposing the Untappd API to your Node application.  You can do all kinds of queries, checkins, comments, and the like right from within your own tool.

You can install node-untappd with:

npm install node-untappd

Make sure to read the README.md file.

To learn more about node-untappd or to see the source code, visit the github repository: http://github.com/arei/node-untappd.

To see the API details go to: http://untappd.com/api/docs/v3.

Enjoy!


permalink: http://www.arei.net/archives/240
Easing NodeJS into the Future

Let me get this right out of the way. I’m a fan. NodeJS is really cool, easy to use, and feels just right the minute you try it out. I’m all in.

Now for the bad news…

There are some problems that NodeJS is facing this year. The upside though, is that these problems can be easily addressed. I only hope it is not too late… but maybe with a little effort, a little organization, and a whole lot of additional groundswell, we can propel NodeJS forward by a giant leap.

Hello, World

The very first programmatic encounter a new user of NodeJS will have is the Hello World example. This is a universal concept across all languages; the entry point is always Hello World. Hello World is the single most simplistic concept in writing a program in any language. It is fundamentally the most basic thing you can do.

The NodeJS Hello World example:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

The problem is that the NodeJS Hello World example basically says that NodeJS is all about building Web Servers and in my opinion that is way too narrow. NodeJS is absolutely awesome at the Web stack, no disagreement. Yet, I believe that NodeJS is so much more and has nearly limitless potential: today we can build CLI programs, statistical analyzers, and stand alone applications all in NodeJS and not once do we need to create a Web Server to do it. By defining our most basic of examples in the terms of a Web stack we are defining our entire ecosystem in those terms and that will continue to limit our potential. It is time for NodeJS to grow out of that perceptual constraint and I believe it starts with Hello World.

My solution, one line long:

console.log('Hello World');

Once you get users in with one basic example, you move on to the next one and the next one after that. Absolutely we should be teaching our n00bs how amazingly simple it is to host your own server in NodeJS, but NodeJS is so much more than that. We need to appeal to all the use cases, not just those who want to build servers. So show us how to do it all.
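
For instance, a natural second example might show NodeJS as a general purpose scripting tool rather than a server, something like this little line counter (my sketch, not an official example):

var fs = require('fs');

// Count the lines in whatever file is named on the command line.
var contents = fs.readFileSync(process.argv[2], 'utf8');
console.log(contents.split('\n').length + ' lines');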

Which leads to the next point…

Documentation

One of the big problems of the documentation is that it is trying to serve two different masters, and it needs to separate its interests. On the one hand, it wants to be a teaching tool, helping new users through common problems and examples. On the other hand, it needs to be a resource document to which the more experienced users can turn. Both of these objectives are utterly necessary, but they are also at odds with one another. So let’s split them apart but keep them cross referenced with one another: the reference has pointers to the examples and the examples cross link to the references.

Once we have separated the concerns from one another, the community can put some real effort into beefing the documentation up big time. With regards to the examples, we need to put some serious effort into teaching our users event driven programming: how it works, why it works, where it is good, and where it is bad. On the reference material side of the house we need to flesh stuff out: every single function, identifier, and object needs to be described and commented upon. Every callback also needs to be defined with exactly what is being passed into it when it is fired. I swear I spend half my time outputting arguments from various callbacks to understand what they are before I can actually use them.
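
To illustrate that last point, here is the ritual that better reference documentation would make unnecessary; fs.stat is just a stand-in for any callback whose arguments you end up discovering by hand:

var fs = require('fs');

// Dump the callback's arguments just to learn what shape they take.
fs.stat('/tmp', function() {
    console.log(arguments);  // reveals (err, stats) by inspection
});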

There are literally thousands of examples on the Internet about how to use NodeJS and thousands of people willing to share their insights. So let us put all that collective talent to work creating an amazing system dedicated to teaching the technology we are all so passionate about. Everything from video tutorials (like that one on youtube), to cross referenced API documentation, to examples of virtually everything we can think of, to sample applications and configurations. There needs to be a one stop shop for all things NodeJS, and its name is not Google. I want to know how to do X and I do not want to hunt all over the place to find it. It is all about making NodeJS more approachable and easier to use.

And with that segue we move on to…

Startup

Simple NodeJS startup is wonderfully easy, just type node and go. Or you can get more fancy and supply a filename to start execution. The reality, however, is that most of us are not working in a simple world. We work in complex, custom, evolutionary, hybrid environments that defy description. I know this first hand, I’ve tried to describe them, it’s not pretty.

So we need to make starting NodeJS easier. After all, if it seems like it’s hard to do (whether or not it really is) we are not going to do it.

See, System Administrators today have a lot invested in their current solutions. They’ve been using Apache Web Server (for example) for almost two decades. They have put a ton of work into whatever cobbled together solution they have. Asking them to change, while great for progress, is just asking for a whole lot of argument. So why not make NodeJS fit into the architecture they already know and love? Why not provide them options and, at the same time, show them how easy it is to love NodeJS?

This comes down to three different ways NodeJS needs to run:

First, some people just want to run NodeJS. We got this one covered today. It’s easy, it’s powerful and it has a lovely command line interface built in if you want it.

Second, some people want to run NodeJS from a Web container. This is basically the PHP or JSP model, where NodeJS runs behind a Web Server and the Web Server sends specific requests to NodeJS as it needs. There are dozens of protocols for doing this (CGI, FastCGI, AJP, etc) and implementing several of these should be, and is, pretty easy. A few github projects do this with varying levels of success. The ultimate goal though, is to bring these things right into the runtime so things are as painless as possible to set up. It could be as simple as adding a command line switch to the NodeJS runtime to tell it that the incoming request is a CGI one or something. I am not trying to implement here, just throwing ideas out.
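
As a sketch of the idea (not a proposal for the actual implementation), a NodeJS script answering a request as plain old CGI might look like this:

#!/usr/bin/env node
// The web server provides the standard CGI environment variables.
var method = process.env.REQUEST_METHOD || 'GET';
var path = process.env.PATH_INFO || '/';

// A CGI response is headers, a blank line, then the body, all on stdout.
process.stdout.write('Content-Type: text/plain\r\n\r\n');
process.stdout.write('Handled by NodeJS via CGI: ' + method + ' ' + path + '\n');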

Finally, some users just want to run NodeJS as a robust service; for example, anyone who wants NodeJS to be their only Web Server and does not want to rely on third party tools. To do so, NodeJS needs to ship with the code and tools necessary for working as a full fledged service. Upstart and Forever are two tools that help with this today, but why can we not put this technology into the runtime? This speaks to making it as easy as possible to get set up and rolling the way a user wants to get set up and rolling. And let us not forget not everything runs on Linux like it should; Windows still has its proponents and we need to be approachable to everyone. Integrated technology for keeping a server up, running, and monitored would be ideal. As more and more NodeJS users look to deploy NodeJS into production, easing this process becomes more and more critical.
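
Today that means bolting on one of those third party tools yourself, something like this with Forever (server.js here being whatever your app’s entry point is):

npm install -g forever
forever start server.js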

And I’m Spent

So that is about all I got so far. I’m sure there are dozens and dozens of other things that could really help NodeJS grow, but to me these are the big ones. Yet, these are also the ones that I think can be fixed right now.

Ultimately, we need to make NodeJS more approachable, more understandable and more reliable. Today NodeJS is crazy popular, but I believe we are rapidly approaching a turn which can direct NodeJS’ fate for the years to come. It is the classic dilemma for any fledgling technology and the roadside is strewn with the corpses of those that have come before us. I honestly believe that this community can steer NodeJS to greatness, if it is willing to do so. This involves hard work, it involves decisive action, and most importantly, it involves foresight to see what is to come. If we, as a community can accept this role, NodeJS will explode to heights even we failed to imagine.

permalink: http://www.arei.net/archives/239