Walker Code Ranger Blog
Jeff Walker <Jeff@WalkerCodeRanger.com>
https://WalkerCodeRanger.com

Learning Rust Modules (Talk)
2015-10-31

My earlier post on “Learning Rust Modules” prompted the Columbus Rust Society to invite me to talk on the same subject. This is a recording of the talk I gave on 2015-09-23.

CSS is Not Composable
2015-09-17

I’ve been a .NET web developer for more than 12 years now. In that time, I’ve worked on quite a few projects across a mix of consulting gigs and various jobs. Generally, I am a full-stack developer, meaning my work ranges all the way from the database, through the business logic and into the front-end HTML, CSS and JavaScript. The projects I work on are typically line-of-business web applications, though I spent a little bit of time at an interactive firm and my current job has me working on a public-facing CMS and LMS. None of the projects I have worked on ended up with clean, clear style sheets, despite a variety of approaches to them. Inevitably, they started out not too bad and slowly devolved into a horrible mess.

My CSS Background

Before everyone says that the teams and I just didn’t understand CSS or know how to use it, let me say that I am quite knowledgeable about CSS, in terms of both the technology and best practices. I understand the box model, selector specificity, floats and clearing, CSS 3 selectors and all the rest. I’ve read extensively about CSS best practices on sites like CSS-Tricks, A List Apart, Smashing Magazine and many others. I’ve worked to understand OOCSS, SMACSS and BEM (which wins the award for ugliest naming convention). On different projects I have tried different approaches, including “anything goes”, semantic CSS and, most recently, my own flavour of OOCSS. I’ve also gotten extensive experience with Bootstrap recently, which I think gives a good insight into how a lot of real-world CSS is done. With all those approaches, the CSS becomes a mess and a pain. I’m open to the idea that I just haven’t found the “right” way, but I would argue that, after all that research and experience, the problem isn’t with me. It is either with the technology or with the community’s inability to explain the right way to use it.

I used to be a true believer in semantic naming. In practice, that often means content-based naming and lots of words from the business domain in the class names. However, that approach leads to many separate classes that have the same styling. Recently, I’ve been working hard to create functional class names instead. That is quite difficult because there just aren’t enough page-layout terms for all the classes and design elements that are needed. I found this approach eliminated the duplication of styling across many different classes, but there was a deeper problem. It was now very difficult to get things to lay out correctly without littering the HTML with tons of classes, many with vague, easily forgotten names. In my reading, it seemed like this wasn’t supposed to be a problem. In fact, the goal was to allow composable layout classes. I read that:

[…] a latest news module found in a sidebar should not be defined by its current place in the sidebar. It should be movable to a main content area, another module, the footer, so on and so forth.

Yet, I found the exact opposite to be the case. When I used the OOCSS approach, I ended up with design elements that only worked in one context, and I either needed a different design element or lots of extra classes to express each of the different places it could appear. After more research, it seemed I’m not the only one who found this. Ben Frain expresses the same difficulty in his post “OOCSS/Atomic CSS are Responsive Web Design ‘anti-patterns’”. Now, to be clear, I don’t think this is the fault of OOCSS. I think this is an inherent issue with CSS.

A Real World Problem

As an illustration, on my current project I was attempting to follow my new OOCSS-like approach and found myself stuck. This was the issue that clarified for me what was wrong with CSS. The problem wasn’t that I couldn’t come up with classes and CSS that would lay out the page the way that was needed. It was that there was no way to make what I thought was a reusable CSS “object” that worked when combined with various other design elements. It seemed to be necessary to have classes and styles for each specific context. In an attempt to learn what I was doing wrong, I posted the problem to Stack Overflow, asking what the “CSS best practices for decoupled modules” are. I worked hard to express clearly that the problem wasn’t that I couldn’t create classes and styles to meet my needs, but that I couldn’t do so in a composable way. I think I succeeded in expressing my question. In the next section, I’ll show a variant of the issue I used in my question. First though, let’s look at the answers I received.

There was only one response. I do appreciate the answerer taking the time to write up a lengthy response. It helped to further my understanding of the reality of CSS. However, the answer amounted to “that’s not how CSS works”. Additionally, he went on to describe a number of “best practices” that were just an explanation of how CSS works. It is a pet peeve of mine that the CSS community seems to be the only development community that thinks describing how the technology works is a description of best practices. In every other development community I have been involved with, best practices are those approaches and procedures an expert in the technology will find most effective. They assume the audience is already aware of how the technology works but that they need guidance on which of the many ways of using that technology would be most effective.

Modules that aren’t Modular

The example I used in my stack overflow question was an instance of what many designers call a “module”. A module is any design element that has a standard box appearance, but can contain a variety of contents. A common example of a module is the aside. Typically asides are presented as floating boxes of a fixed width with certain margin and padding. Most commonly asides contain paragraphs of text. However, some asides might also contain lists, quotes, floating images or even a little form (for example, an aside might provide a contact form).

The problem arises with the bottom padding of the module. The designer wants to ensure that there is always at least a certain amount of padding. However, some elements have their own bottom margin. That margin, combined with the padding, causes too much white space at the bottom of the module. We can’t simply adjust the padding so the total white space is correct, because the module can contain different elements with different bottom margins. This is a common enough problem that CSS-Tricks describes seven ways to solve it in their article “Spacing The Bottom of Modules”. The last solution they land on is to remove the margin from the last element in the module and its last children, like so:

.module > *:last-child,
.module > *:last-child > *:last-child,
.module > *:last-child > *:last-child > *:last-child
{
  margin: 0;
}

However, that isn’t really composable either. The problem is that the last child is the last child in the DOM, but may not be the last child visually. This can happen when there are hidden or floating elements. The visually last element can even be dynamic. Consider a module containing a floating element in a fluid responsive layout. The floating element needs left and bottom margin to provide space between it and the wrapping text. When the viewport is narrow, the text may be the element in contact with the bottom of the module. However, as the viewport gets wider, the floating element may become the element in contact with the bottom of the module. Then the space between them will be the sum of the module padding and the floating element’s margin. That is more space than desired. One could use a media query to remove the bottom margin of the floating element at that point. However, because the width at which this occurs depends on the exact size and position of the floating element as well as the exact text and font, this would have to be done on a case-by-case basis.
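Such a case-by-case media query might look something like the following sketch. The class names and the 600px breakpoint are hypothetical; in practice the breakpoint would have to be tuned for each specific combination of content, font and module width.

```css
/* Hypothetical per-module fix: at the width where the floated
   element becomes the visually last item in .module, remove its
   bottom margin so it doesn't stack with the module's padding. */
@media (min-width: 600px) {
  .module > .floated-figure {
    margin-bottom: 0;
  }
}
```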

(Figure: a floating element in a module at different viewport widths.)

This is just one example of such situations. There are many other ways this happens. As another example, imagine a responsive grid layout. It could be that at each breakpoint a different element becomes the visually last element in a container, and each one has a different margin. Correcting that requires a very complex set of breakpoints in one’s style sheet. There is no general, composable way to deal with these issues. The only approach I can find is to deal with them case by case on each page or for each particular arrangement of elements.

What’s My Container Width?

Lest you imagine this problem is limited to margin and padding, consider the humble article image on a fluid site. Imagine that in our design, at larger viewport sizes, the image should float to the right with text flowing around it and a certain amount of margin all the way around. Of course, at some point as the viewport gets smaller, we will want to stop floating the image and change it to a centered block. That should be simple enough. We can use a media query to change the layout at a certain breakpoint. If we know the width of the article area and the width of the image, we can easily decide on a reasonable breakpoint that prevents the text area to the left of the image from becoming too thin.

However, what if those two implicit assumptions don’t hold? Imagine that, on our main page, articles are displayed at 100% width, less some margin on the sides. On category pages, articles are displayed at 80% width to allow for a 20% navigation bar. Now the width of the article is context dependent. We can imagine more complex site designs with more than two different article-width contexts. Alternatively, imagine that the images in different articles aren’t all the same width. We’ll set a max width and height so they never get too big, but depending on the image size and aspect ratio, we could need a different breakpoint for every image. What is really needed is the ability to change the style based on the size of the container, or better yet, based on the amount of space between the element and the edge of its container. That is how a designer would express the rule. He would say: when the space for the text next to the image is less than a certain amount, rather than floating the image, center it above or below the text.

CSS is Not Composable

I believe the root of the problem with CSS is its lack of composability. There are minor annoyances, like the lack of variables, which can be addressed by preprocessors such as Less and Sass. Yet the challenge of CSS remains. If CSS were composable, design elements could be placed next to one another, nested, or combined on the same element without unexpected consequences, and without special CSS for the particular combination of design elements or the particular page. The need for CSS for each particular combination produces coupling. Design elements are no longer fully independent. This coupling, just as in object-oriented design, leads to code that is difficult to understand and maintain and that lacks flexibility. Composability produces flexibility because it enables rearrangement in a myriad of ways. I long for a web design language with that power and elegance.

Learning Rust Modules
2015-08-29

This post is based on Rust 1.2.0, and I am just starting out with Rust, so it is possible I have misunderstood something.

The other day, I sat down to write my first bit of Rust code ever. I was working on a simple kata, and many people would just whip up something with all the code in one file. However, having worked as a C# developer for many years, I am in the habit of organizing my code into namespaces and separate files. I was totally stumped for a while on how to do this in Rust. The documentation on modules wasn’t immediately helpful. I later figured out that certain key sentences did in fact explain how modules work in Rust. However, coming from a C#/Java way of doing namespaces/packages, they weren’t explicit and direct enough to flip my thinking around. Now that I’ve figured out how modules work, I thought I’d share what I learned.

Review of Namespaces

C# namespaces and Java packages are very similar. In this post I’ll focus on C# namespaces, because I’m a C# developer, and only mention Java when there is a difference. By convention, we organize and name our code files by the namespaces and classes they contain. But really, there is no correlation between the file structure of the code and the namespaces and classes. We are free to name each file totally differently from the classes it contains and to put it in a directory that has nothing to do with the namespaces in it. Indeed, we could combine all our code in a single file if we so chose. When compiling, we are really providing a list of files to compile together. Typically our IDE hides that from us, either by keeping a list of the code files in a project file or by simply assuming that every code file in a project directory should be compiled together.

Although we typically don’t think about it this way, namespaces aren’t really entities; they are just a way of creating really long unique names for classes. So the class MyNamespace.MyClass could logically be thought of as having one long class name, MyNamespace__MyClass, with some compiler help to make it easy to refer to by figuring out the first part of the name (note, class names aren’t actually changed this way). In fact, in the compiled code, namespaces exist only as long names for classes, and every reference is fully qualified with its namespace. With that refresher, let’s look at how Rust does modules.

Rust Modules

Rust modules do not work like C# namespaces or Java packages. First, in addition to organizing classes the way namespaces do, they can also contain static variables and functions. In this way, modules are similar to C# static classes or to a Java class containing only static members. Second, Rust modules aren’t unrelated to code files the way namespaces are. When we put all of our code in one file, the two look very similar. For example, if we were implementing a math library, some of our code might be:

pub mod number
{
    pub struct Complex<T>
    {
        r: T,
        i: T
    }

    pub struct Vector<T>
    {
        x: T,
        y: T
    }
}

pub mod trig
{
    fn sin(x: f64) -> f64 { unimplemented!() }
    fn cos(x: f64) -> f64 { unimplemented!() }
}

However, if we would like to split our code into multiple files, then they work differently. Rather than declaring the module in each file, we declare the module without a body in the parent Rust file, and the compiler “includes” the code of the corresponding file. The pulling in of the other code file feels like a C-style #include, although it isn’t handled by a preprocessor, so it isn’t a simple textual include with all the gotchas that come with that. The Rust compiler will look in two places for the file to include for a module. It will look in the directory of the parent code file for module_name.rs or module_name/mod.rs. Thankfully, having both files is a compile error. The second form, using a directory, allows one to have further modules nested inside the given module. I imagine most people will use the first style whenever possible. When compiling Rust, you really provide only one main file to compile, and the compiler finds the rest by including them as modules. If we split our math example up using this technique, we could get:

// lib.rs
pub mod number;
pub mod trig;

// number/mod.rs
pub struct Complex<T>
{
    r: T,
    i: T
}

pub struct Vector<T>
{
    x: T,
    y: T
}

// trig.rs
fn sin(x: f64) -> f64 { unimplemented!() }
fn cos(x: f64) -> f64 { unimplemented!() }

Notice that lib.rs contains the declarations of the number and trig modules. This is what actually creates the modules. The other files contain no reference to the modules. Their contents are in the module by virtue of the Rust compiler including them because of the module declarations in lib.rs. I’m trying to withhold my judgement, but coming from a C# background this just feels weird.

One Type, One File

If you’re a C# developer looking at this, you’re probably thinking, “but Complex<T> and Vector<T> are still in the same file; how do I separate them?” It is actually not possible to split a module across files in Rust the way one can split a namespace in C#. So, to put them in separate files, they would need to be in separate modules. However, there is an interesting workaround for this. Notice that we specified our modules to be public using the pub keyword. C# namespaces aren’t public or private; only the classes in them are. Rust also allows its equivalent of the using statement, the use statement, to be public or private. When use is private, it functions very much like a using statement. When it is public, it allows one to incorporate items from one module into another module. These features allow us to put types into their own files, and hence their own modules, but expose them to the outside world as if they were together in one module. Doing that to our math example would look like:

// number/mod.rs
mod complex;
mod vector;

pub use number::complex::Complex;
pub use number::vector::Vector;

// number/complex.rs
pub struct Complex<T>
{
    r: T,
    i: T
}

// number/vector.rs
pub struct Vector<T>
{
    x: T,
    y: T
}

Notice that in mod.rs the two modules are not declared with pub. They default to private, so they are not visible as sub-modules. Then we use public use statements to include Complex and Vector from their sub-modules into the number module. Now we can use them as if they were in the number module.
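To see the whole pattern in one place, here is a single-file sketch of the same idea. The modules are inlined rather than split into files, the struct fields are made pub so the example compiles on its own, and the paths use a self:: prefix so the sketch also works on versions of Rust newer than the one this post is based on.

```rust
// Single-file sketch of the re-export pattern above. In the real
// layout, `complex` and `vector` would each be their own file.
pub mod number {
    // Private sub-modules: not visible outside `number`.
    mod complex {
        pub struct Complex<T> {
            pub r: T,
            pub i: T,
        }
    }

    mod vector {
        pub struct Vector<T> {
            pub x: T,
            pub y: T,
        }
    }

    // Public `use` re-exports the types as members of `number` itself.
    pub use self::complex::Complex;
    pub use self::vector::Vector;
}

fn main() {
    // Callers refer to `number::Complex`, not `number::complex::Complex`.
    let c = number::Complex { r: 1.0, i: -2.0 };
    let v = number::Vector { x: 3, y: 4 };
    println!("c = {} + {}i", c.r, c.i);
    println!("v = ({}, {})", v.x, v.y);
}
```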

Crates

Up to this point, we have talked about modules within your own code. However, it won’t be long before you want to use modules written by other people. These are packaged in what Rust calls “crates”. A crate is a library or executable and is the equivalent of the .NET assembly concept. The closest equivalent in Java would be .jar files. Crates are also the unit of publishing packages, like NuGet packages in .NET. You can browse the published crates at crates.io. If you wanted to use the primes crate (to pick a random example), simply add these lines to your Cargo.toml file:

[dependencies]
primes = "0.2.0"

Then, from your code, import the crate with:

extern crate primes;

This exposes the contents of the primes crate to your code in a module named primes.

Something important to know, which isn’t stated in the docs right now, is that many crates have dashes in their names, but dashes are not allowed in module names. When using one of these crates, you use the name with dashes in the Cargo.toml file, like media-types = "0.1.0". However, when referencing the crate from code, replace the dashes with underscores. You can also use as to bring any crate in under a different module name.

extern crate media_types as mt;

Note that the dash in the crate name is replaced with an underscore to give a valid module name. This also shows bringing the module name in as “mt”. Older versions of Rust handled this differently, so you may see some examples with the old syntax that uses double quotes.

Modules are more than Namespaces

Earlier, I mentioned that modules, unlike namespaces, can contain static variables and functions. In this respect, they are like C# static classes or a Java class containing only static members. All of these have a lot of similarities to an instance of the singleton pattern. We have already seen an example of static functions in a module in the sin and cos functions. Now, let’s look at static variables. They enable a module to hold state, which can be private to the module. Static variables are declared with the static keyword, very much like let.

static NAME: &'static str = "Jeff";

Notice the special lifetime 'static that all static variables have. In addition to the special static lifetime, they have a number of restrictions that let bindings do not. Their type must always be specified. Accessing a mutable static is an unsafe operation, because other threads could be modifying it at the same time. Their value must be initialized to a constant expression. Finally, they can’t implement Drop. These restrictions ensure thread safety and avoid the complex issues that C# and Java have with static initializers and clean-up of statics on program exit.
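As a minimal sketch (the config module and its names here are hypothetical, not from the math example above), module-level statics look like this:

```rust
mod config {
    // The type annotation is required, and the value must be a
    // constant expression; both statics have the 'static lifetime.
    pub static NAME: &'static str = "Jeff";
    pub static MAX_RETRIES: u32 = 3;

    // Module functions can read immutable statics from safe code.
    pub fn greeting() -> String {
        format!("Hello, {}!", NAME)
    }
}

fn main() {
    println!("{}", config::greeting());
    println!("retries allowed: {}", config::MAX_RETRIES);
}
```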

Paths

I’ve already shown several examples of the use keyword. It is important to note that it brings into scope only the final name in the path, that is, the name after the last pair of colons. This differs from the C# using keyword, which brings in all the contents of the namespace listed after using. Given the ability to re-export names with public use statements, it makes more sense for use to work that way. As with using statements, the module path of a use statement is always relative to the root module. However, paths used elsewhere in code are relative to the current module. There are a number of useful options available for the use statement. I encourage you to read about them in the docs for use. In addition, you can use the as keyword with use in the same way as with extern crate.
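A small sketch of those options follows. The math module and its contents are hypothetical, and the paths are written with a self:: prefix (which at the crate root also names the root module) so the example keeps working on versions of Rust newer than 1.2.0.

```rust
mod math {
    pub mod trig {
        pub fn sin(x: f64) -> f64 { x.sin() }
        pub fn cos(x: f64) -> f64 { x.cos() }
    }

    pub mod stats {
        pub fn mean(values: &[f64]) -> f64 {
            values.iter().sum::<f64>() / values.len() as f64
        }
    }
}

// `use` brings in only the final name of the path...
use self::math::trig::sin;
// ...braces pull in several names at once (`self` here names `trig` itself)...
use self::math::trig::{self, cos};
// ...and `as` renames, just like with `extern crate`.
use self::math::stats as st;

fn main() {
    println!("sin(0) = {}", sin(0.0));
    println!("cos(0) = {}", cos(0.0));
    println!("trig::sin(1) = {}", trig::sin(1.0));
    println!("mean = {}", st::mean(&[1.0, 2.0, 3.0]));
}
```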

Edit 2015-08-30: Corrected explanation of dashes in crate names to reflect current version. Added “Paths” section based on feedback from the Rust subreddit.

Unleashing C# Generic Constraints with Extension Methods
2015-07-01

A number of times in my career, I have found my work frustrated by the sometimes arbitrary limitations placed on generics in C#. The addition of generic covariance and contravariance was a big step forward in that regard. Still, there remain many frustrating limitations. The fact that you can’t use Enum or Delegate as a generic constraint can be worked around using packages like ExtraConstraints.Fody or UnconstrainedMelody. However, extension methods also provide a little-known way of working around some limitations. It is so little known that I couldn’t find a blog post that discussed the technique while working on this one (though I think I recall reading one once). Indeed, these more than two-year-old stackoverflow.com questions, “Is it possible to constrain a C# generic method type parameter as ‘assignable from’ the containing class’ type parameter?” and “Constrain type parameter to a base type”, had no answers showing this approach until I answered them while writing this.

An Example

Imagine we create a generic pair class that we will use any time we want to deal with a pair of values that may or may not be of the same type.

public class Pair<TFirst, TSecond>
{
    public TFirst First;
    public TSecond Second;

    public Pair(TFirst first, TSecond second)
    {
        First = first;
        Second = second;
    }
}

Then we might find that we sometimes want to know if the two values are in order. So we imagine we could write a method to tell us if that is the case.

public bool InOrder()
    where TFirst : IComparable<TSecond> // Doesn't compile
{
    return First.CompareTo(Second) <= 0;
}

We’ll quickly realize that this code doesn’t compile at all, because we aren’t allowed to add generic constraints to a non-generic method.

Another time, we think it would be useful to be able to apply a function to both values. So we attempt to write the apply method. We will need to constrain the type of the function to accept both the first and second value.

public Pair<TResult, TResult> Apply<TValue, TResult>(Func<TValue, TResult> func)
    where TFirst : TValue // Doesn't compile
    where TSecond : TValue // Doesn't compile
{
    return new Pair<TResult, TResult>(func(First), func(Second));
}

But again, we are thwarted by the compiler. In both cases, the underlying issue is that you can’t further constrain the type parameter of a class from a method.

Lastly, we might imagine it would be nice to have a method that swapped the first and second value. Of course, that only makes sense when they have the same type. It is very difficult to envision how one might write this method as there is no generic constraint for type equality in C#.

Extension Methods to the Rescue

In all the above situations, an extension method can easily be used to work around the limitations of generic constraints. That looks like:

public static class PairExtensions
{
    public static bool InOrder<TFirst, TSecond>(this Pair<TFirst, TSecond> pair)
        where TFirst : IComparable<TSecond>
    {
        return pair.First.CompareTo(pair.Second) <= 0;
    }

    public static Pair<TResult, TResult> Apply<TFirst, TSecond, TValue, TResult>
        (this Pair<TFirst, TSecond> pair, Func<TValue, TResult> func)
        where TFirst : TValue
        where TSecond : TValue
    {
        return new Pair<TResult, TResult>(func(pair.First), func(pair.Second));
    }

    public static void Swap<T>(this Pair<T, T> pair)
    {
        var temp = pair.First;
        pair.First = pair.Second;
        pair.Second = temp;
    }
}

Notice in the Swap method how we are able to specify that the first and second types are equal by simply using the same type parameter for both.

Not Perfect

This approach sometimes leads to situations where the type parameters can’t be inferred, and it is then necessary to specify all of the type parameters explicitly, since there is no way to specify one type and have the others inferred. It also doesn’t address other limitations of generic constraints, like the fact that you can’t specify a constructor constraint with parameters. Still, I hope this will be a useful trick you can add to your toolbox.

Advice for Open-Source Projects
2015-06-24

Recently, I’ve been comparing JavaScript build tools and their plug-ins. At the same time, I’ve been checking out static site generators like Jekyll, Hexo, Middleman, Metalsmith, DocPad, Champ, Assemble and others. For a more complete list, see StaticGen.com and this survey of .NET static site generators. However, I don’t really want to talk about static site generators today.

What I want to talk about is all the things that go into an open-source project besides the idea and the code. Of course, one should have a cool idea for an open-source project, and someone is going to have to write the code. I absolutely care deeply about quality, clean code. Yet with an open-source project, there is a lot of other stuff that goes into it. Often, it is all that other stuff that makes an open-source project great and gets people to use it. I can’t promise you that if you do all these things people will use your project, but it will certainly help.

All Open-Source Projects

There are a couple of things I saw that we all need to do better at, regardless of whether one manages a massive project with hundreds of contributors or has just written some code in an hour and posted it on GitHub.

Clearly Indicate the Status of the Project

The most important thing you can do with any code you post online is clearly state, at the top of the read me or in some other highly visible place, the status of your project. This should answer questions about the current state of the code and about the likely future path of development. Here are some ideas for statuses one might use, though a couple of sentences of additional explanation will always be helpful.

To describe the state of the code:

  • Works for Me or No Warranty: This is some code you threw together; it solved your problem or met your goal, but no effort has been spent making sure it is suitable for others to use.
  • Sample, Example or Starting Point: The code demonstrates how to do something, but shouldn’t be taken as production code. It should probably be incorporated into other projects as their own code that they will own and further develop in the future.
  • Pre-release or Beta: The project has gone through multiple rounds of active development with a goal of reaching a stable release version, but is not there yet.
  • Stable/Release: The project has reached a stable point where there should be a minimum of bugs and the necessary features are present. Use this if there is a stable release, even if you are also working on a new version that is still in beta.

To describe the future path of development:

  • Active (with Date): This indicates people are currently actively contributing and addressing issues and questions on a semi-regular basis. The current date should always be included with an active status and updated at least twice a year so that people know it is true and not just a statement that hasn’t been updated correctly.
  • Inactive: The project is not currently being maintained, but there are still people who would notice if someone reported a horrible bug or submitted a pull request. No guarantee is made that they would be addressed though they would at least get a reply.
  • Hiatus: The project is currently inactive, but there is an intention that it will return to an active status in the future. An idea of the time-frame is helpful here.
  • Ignored: This code is not being maintained; issues, questions, bugs and pull requests will not be looked at.
  • Legacy: The project was once active and may have had a reasonable community. There is still some work to fix important bugs and answer some questions, but no major features will be added.
  • Obsolete: The project uses technologies or versions of those technologies or approaches the owner and contributors think are obsolete and not worth further development. The code remains available for existing users or new users who are for some reason stuck on the old technologies. It may be helpful to combine this with a status like inactive, ignored or legacy to give a better indication of what users can expect.

You might imagine that large open-source projects with slick websites are exempt from needing to describe the project status because their website indicates the amount of effort being invested into the project. However, it is not uncommon to come across a beautiful open-source project website that was created four years ago and the site and project have been ignored ever since.

State a Clear Value Proposition

Explain in a short paragraph what your project is and does and why people would want to use it. Don’t assume other technologies you reference are known to the reader. They may have stumbled across your project even though they work in a totally different technology stack. When referencing other technologies, provide a link and a couple word description of what it is. Make sure it is clear what your project does. It is very easy for you, who knows the project inside and out, to not make this clear, because it is so obvious to you. Finally, include why someone would choose your project over other options. Try to avoid purely technical features like performance. Focus on features that are distinctive and not shared by most projects like yours. If you really want others to use your project, think of this as your sales pitch.

Committed Open-Source Projects

The above two things are what I think every bit of open-source code thrown up on the web needs to do. But what if you actually want to start an open-source project that you hope people will use? Or what if you become a core contributor to a project you want to see succeed? The following is my advice on how to make that project a success.

Recognize the Commitment and Make it

Creating and maintaining a successful open-source project is a lot of work. It is often as big as any project you might tackle at your job. Furthermore, you will be called on to do tasks you probably aren’t responsible for at work, like documentation and website design. There will be important bugs that, if not fixed in a timely manner, risk alienating your user base. There will be lots of questions and requests for help, many of which will be pretty “stupid” questions. All of that will be done in your free time, when you could be doing something else (unless you are one of the lucky few who is responsible for an open-source project at work). While you are spending your free time, there will be long stretches with little to no positive feedback. All of that will take a lot of commitment. The most important thing you can do before starting an open-source project, or becoming an important contributor to one, is to carefully consider all the work it will entail. Don’t shy away from it; you need to face it up front so that when difficult days come, you will know you anticipated them. If, after all that, you still want to do the project, then make a firm commitment to it and stick with it. The most successful open-source projects generally have a core team of contributors who were very committed for a very long time.

Don’t try to be All Things to All People

Given that this is an open-source project and not a product one is profiting from, it is important to limit the scope and focus on the important things. In large part, that means you don’t need to cater to every option and work style your users might have. It is good for your project to be opinionated and focused. If users don’t like those aspects, there are lots of other projects for them to choose from. It is much more important for the project to be excellent for a small dedicated user base. Trying to include as many users as possible often leads to a system that is incomplete and buggy. For example, the Pretzel project is trying to create a .NET port of the popular Jekyll static site generator. However, at the same time they are throwing in support for the Razor template engine. Now their effort and focus is divided and they have twice as many things to test and document. I think this is leading to reduced quality. For example, when I last downloaded the project, the sample Razor template site didn’t even compile. Once a project has successfully completed its core and has a growing user base, then it can expand to other functionality if it makes sense.

Docs Are as Important as Features

I see this all the time: a project is described in glowing terms and lists really cool features, so I go to use it only to discover there is no documentation except for one or two blog posts that barely count as an introduction. What the majority of potential users do at that point is look for other options. That is why your documentation is just as important as your features. Commit yourself to writing the documentation for a feature as soon as it is in a state where it is ready to be used. Also, don’t forget to update the docs as features change; that way they will always be up to date. The documentation needs to be a website, wiki or read me available on the web. If users have to download something to read the docs, a certain percentage of them won’t bother. Also, a bunch of blog posts don’t count as documentation. Blog posts are great for introducing new projects and features and for promoting a project, but they aren’t documentation. They can’t be updated to match the current version. It is incredibly frustrating to be referred to a blog post as documentation only to find that the API and features have changed significantly since then. Often there is no post describing the changes. Even if there is, it now takes twice as much work to read both posts and synthesize an understanding of the project.

Learn the Lessons of Your Predecessors

Typically there have been projects like yours before in your technology stack or in others. They have tried what you are doing and discovered the pitfalls and best practices. Look carefully at other projects doing what your project does. Make sure you understand how they work and why they work that way. Read through the issues they had. You might discover your exact approach has been tried but found wanting. You will likely save yourself a lot of time in the less core areas of the project which you haven’t been carefully thinking about.

A Slick Website Looks Professional

As developers we don’t value beautiful marketing material as much as we should. Even though your project isn’t a product you are selling for money, you are basically selling the use of it to your users. Having a slick website to sell the project gives people a good feeling about it and makes it look professional: if you invested the time in a great website, the thinking goes, you must also have invested the time in a great project. The site doesn’t have to be large. Even a couple pages of description, features and contact information with links to documentation, code and downloads will go a long way.

Make Sure the Project Runs out of the box

If you have a sample project or example of using your library, be absolutely certain that it works immediately out of the box after users download and install it. If a user’s first experience with your project is that they can’t get the sample to work, they will be left with a very bad taste in their mouth. With each new version, test the sample. If possible, write unit tests to cover as much of it as possible. Another way to fail in this area is outdated download instructions or links that point at an old or broken version.

Practising What I Preach

Looking at all these open-source projects and weighing them against the points above, I realized some of my own projects are failing in one or more of these areas. My next step is to go through each of the projects I have posted online and address these.

In summary, here again is my advice. The first two are for all open-source code posted online. The rest are for serious open-source projects:

  1. Clearly Indicate the Status of the Project
  2. State a Clear Value Proposition
  3. Recognize the Commitment and Make it
  4. Don’t try to be All Things to All People
  5. Docs Are as Important as Features
  6. Learn the Lessons of Your Predecessors
  7. A Slick Website Looks Professional
  8. Make Sure the Project Runs out of the box
]]>
The State of JS Build Tools 2015 2015-06-17T00:00:00+00:00 f9fe5876-39f4-4415-b829-464193718e04 I’ve recently been looking at JavaScript build tools because I am starting a project in AngularJS. So of course, I will need a build tool to compile, bundle and minify my scripts and style sheets. Another reason to look into these now is that Visual Studio 2015 will add support for task runners like Grunt and Gulp.

My starting point in this process was the “Gruntfile.js” created for me by the useful scaffolding tool Yeoman. Out of the box it came configured to use Sass with SCSS, while I prefer Less. So, I looked into what it would take to switch. The Grunt configuration, for what I considered a fairly typical build, was hefty, weighing in at 450+ lines. Digging in further, I discovered a number of things I didn’t like. That led me to evaluate other options. Moving on from Grunt, I looked at the next most popular option, Gulp. Then, I investigated the less common Broccoli.js and Brunch. I briefly considered the obscure Mimosa and Jake, but didn’t see anything compelling enough on their websites to make me think they needed to be considered over and above the others. This post details my findings.

Criteria

When considering these JavaScript build tools, I bring a certain perspective and set of expectations. I’m a .NET web developer who has been using Visual Studio for a long time. In the past I have worked in Java and C++ and used a number of IDEs. I have some experience with a wide range of build tools including make, msbuild, rake, nant, and psake. That experience has led me to prefer a polished tool with lots of convention over configuration. To me, a build tool exists so I can work on my project and should stay out of my way. I should have to spend a minimal amount of time configuring and debugging the build. Most IDEs are pretty good in that regard. You add a file and the IDE just does the right thing based on the file extension (one only occasionally needs to set a few options).

Being a professional web developer, I also expect that a web-focused build tool will have lots of support for the complex asset pipelines of modern web apps. This is an area where Visual Studio definitely falls down. It is also an interesting and unique requirement for build tools. Traditional tools like “make” rarely had to deal with situations like this. Compiling a C++ app is mostly about finding all the source code, compiling and linking; there was rarely much more to it than that. For web applications, there are many different kinds of files that must each pass through their own long pipeline of transformations and combinations. Additionally, the result isn’t a single executable, but a large assemblage of files. That makes the need for good file clean-up much more important. Finally, developer expectations have risen since the original days of make tools. Then we were happy to get an executable when we ran the make command. Now, we expect file watching with incremental rebuilds and live browser reloads.

Here is a brief list of the steps/actions I expect will be present in most JavaScript web app builds. Of course, they will vary from project to project and by developer taste.

  • Transcompiling JavaScript: CoffeeScript, Dart, Babel, Traceur etc.
  • JavaScript Transforms: wrapping in modules or ng-annotate etc.
  • Bundling/Concatenation: combining of scripts and styles into multiple files
  • Minification: scripts, styles and html
  • Source Maps: for both scripts and styles
  • CSS Preprocessor: Less, Sass, Stylus etc.
  • Style Transforms: Autoprefixer, PostCSS etc.
  • Cache Busting: renaming files with a hash to prevent incorrect caching
  • Image Optimization
  • Compiling Templates: Mustache or HandlebarsJS, Underscore, EJS, Jade etc.
  • Copying Assets: html, fav icon etc.
  • Watching for Changes
  • Incremental Rebuild
  • Clean Build: deleting all files at start or preferably cleaning up files as needed
  • Injecting References: generating script and style tags that reference the bundled files
  • Build Configurations: separate Dev, Test and Prod configuration, for example to not minify html in dev build
  • Serve: running a development web server
  • Running Unit Tests
  • Running JavaScript and CSS Linters: jshint, csslint etc.
  • Dependencies: handle dependencies on npm and Bower packages, Browserify etc.

Findings

So what did I find? There were some common failings in all the build tools. One was the presumption that npm and Bower package restore are outside the scope of the build. Unlike an IDE such as Visual Studio, which automatically runs NuGet package restore as needed, none of the JS build tools attempts to restore packages. This was often impossible to work around because the build tool itself must be one of the npm dependencies of your project and will not run until it has been restored. Several of the tools did have plug-ins supporting Bower package restore. However, almost all examples I saw assumed one would first run npm install and then bower install before building. They didn’t even set up bower install as an action triggered by npm install.

Another shortcoming of all the tools is the handling of dependencies. This isn’t entirely their fault, since the JavaScript ecosystem is still trying to figure out the issue. Apparently, npm packages work fine in Node.js (though the reasons for the deep paths and duplicate packages created in node_modules are beyond me). But front-end packages are not nearly so well thought through. Unlike a code library, a front-end package has lots of different assets which may be in many different languages, each of which has its own way of referencing one another. Furthermore, depending on the project toolset, the build might need dependencies either before or after compilation. For example, some projects need the Less files of Bootstrap, while others need the compiled CSS. So how is one to express and control how those assets are brought into the project? Bower isn’t the answer if you ask me (at least not yet). Referencing a front-end package needs to become as simple as referencing an assembly or NuGet package is in .NET. Until it is, setting up builds involving dependencies will always be a pain. With that understanding of the shared issues with JavaScript build tools, let’s now consider each tool in turn.
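For what it’s worth, npm’s standard lifecycle hooks make chaining the two installs trivial; here is a sketch of the relevant package.json fragment (the postinstall hook name is npm’s own; the rest of the file is elided):

```json
{
  "scripts": {
    "postinstall": "bower install"
  }
}
```

With this in place, a plain npm install restores the Bower packages too, which none of the examples I found bothered to do.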

Grunt

Grunt seems to be the starting point for many people when it comes to JS build tools. It certainly appears to be the most widely used one. However, the GruntJS.com website proudly proclaims Grunt as “The JavaScript Task Runner” and I think that is a good description. Grunt almost doesn’t qualify as a build tool. It really is a system for running a sequence of commands/tasks. Each Grunt task is a plug-in that represents a tool one might run from the command line. Each plug-in defines what its configuration in the Gruntfile is named and what the configuration options are. The fixed section names mean one can’t organize or name the configuration in a way that makes sense to them. Though there are many commonalities in plug-in configuration, in practice you are forced to read the docs of each plug-in to have any idea how to configure it. That means the otherwise acceptable docs are marred by the often poor and cryptic documentation of individual plug-ins. Since each plug-in is essentially a command line tool, each takes a set of input files and a set of output files. To chain together the long asset pipelines commonly needed requires one to manage an ever expanding set of intermediate temp files. File watching and live reload are manually configured by specifying tasks to run when files matching file name patterns are modified. The server configuration options are still totally opaque to me. Incremental builds must be manually set up using grunt-newer. In practice, that means having a deep understanding of how each task makes use of its various input files and which of the cascading temp files will need to be rebuilt. All of this manual configuration leads to enormous, inscrutable configuration files. If you just need to run a couple tasks, Grunt will be fine for you. If you need a real web development build tool, I strongly suggest you look elsewhere.
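To make the temp-file chaining concrete, here is a hedged sketch of a tiny Gruntfile using the standard grunt-contrib plug-ins (the paths are illustrative, and a real 450-line config would have many more sections):

```javascript
// A sketch of why Grunt pipelines sprawl: every step is a separate task
// with its own files block, chained through intermediate temp files.
module.exports = function (grunt) {
  grunt.initConfig({
    less: {
      dist: { files: { '.tmp/app.css': 'src/app.less' } } // step 1 writes a temp file
    },
    cssmin: {
      dist: { files: { 'dist/app.min.css': '.tmp/app.css' } } // step 2 must read that temp file
    },
    watch: {
      styles: { files: ['src/**/*.less'], tasks: ['less', 'cssmin'] } // watching is wired by hand
    }
  });
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['less', 'cssmin']);
};
```

Every additional pipeline stage means another temp path threaded between two config sections, which is exactly how the file balloons.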

Gulp

Gulp is the hot new tool. It is rapidly gaining in popularity and there are some justifiable reasons for this. It recognizes and attempts to handle the primary unique requirement of web build tools, namely the long pipelines. In Gulp all plug-ins work on streams (well as of today there are still a number that don’t, which is very annoying). This streaming model allows Gulp tasks to be chained together in streams without worrying about or even producing intermediate temporary files. Gulp touts its high performance due to both streams and parallel task running. Yet, many users have complained that the parallel task execution is confusing and there are plans to extend the configuration to give more control over this. Personally, I found the parallel task execution very intuitive and feel the proposed cure is worse than the problem.

While streams are a compelling base for a build tool, I ultimately found Gulp lacking. Since the core configuration options are minimal, the attitude seems to be that little documentation is needed, but I found the documentation inadequate. Like Grunt, it was necessary to rely on the usually poor documentation of the plug-ins. This was somewhat mitigated by the greater consistency in plug-in usage versus Grunt. The Gulp authors believe in opinionated software and have some strong opinions when it comes to Gulp plug-ins. Normally, I appreciate opinionated software, but in this case I have to disagree with the authors’ opinions. They believe that a tool that already supports streaming should not be wrapped in a plug-in, but used directly. This runs counter to making builds as easy as possible to setup because it requires one to learn and adapt the tools’ unique APIs to the Gulp system. For a peek into the mess this causes, check out the GitHub issue over the blacklisting of the gulp-browserify plug-in. In it, dozens of users attempt to figure out the correct way of using Browserify. I don’t think any quite hit upon the difficult to find official recipe. But note, even that fails to support the important case of streaming files into Browserify. The whole issue is complicated enough that it takes an excellent but lengthy blog post to fully explain. Furthermore, the streaming model becomes confusing for things that don’t fit into it, such as cleaning. Source maps in particular are odd. I attempted to combine external source maps with cache busting with a manifest file and cleaning up old revisions and while the final config was quite short, it took me several hours and additional plug-ins to get working. Watching still requires manual configuration and incremental building is quite confusing to setup. Figuring out how to do anything is made worse by the plethora of plug-ins for doing the same task. Finally, while I don’t understand the issue myself, I found many discussions of needing to do extra configuration to get reasonable error messages out of Gulp.
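For comparison, here is a hedged sketch of the same Less-to-minified-CSS pipeline as a gulpfile (the plug-in names are the common ones circa 2015; treat the exact set as an assumption). Note how each step pipes into the next with no intermediate temp files:

```javascript
// A minimal gulp 3.x-style task: one stream from source to destination.
var gulp = require('gulp');
var less = require('gulp-less');
var minifyCss = require('gulp-minify-css');
var sourcemaps = require('gulp-sourcemaps');

gulp.task('styles', function () {
  return gulp.src('src/**/*.less')
    .pipe(sourcemaps.init())   // start collecting source maps
    .pipe(less())              // compile Less in-stream
    .pipe(minifyCss())         // minify in-stream
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('dist'));  // only the final files touch disk
});

gulp.task('watch', function () {
  // watching is still wired manually, per the complaint above
  gulp.watch('src/**/*.less', ['styles']);
});
```

The stream is elegant for this happy path; the trouble described above starts once cleaning, cache busting and manifests have to be bolted onto it.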

Broccoli

Broccoli.js is a newer build tool that has been in the works for a while. It doesn’t seem to have much traction yet, but hopefully its inclusion in the Ember command-line tool will change that. I really like that the developer has thought through most of the issues facing a JavaScript build tool. Support for watch and incremental build is built in so that Broccoli “will figure out by itself which files to watch, and only rebuild those that need rebuilding”. It achieves that through a sophisticated caching system built on top of its core abstraction of file trees. Trees are Broccoli’s answer to Gulp streams and are easily chained. To simplify the system, Broccoli doesn’t run tasks in parallel like Gulp, but claims that this isn’t necessary because performance is still good. I highly recommend you check out the author’s Comparison With Other Build Tools (scroll down to section 5).

Despite all these excellent attributes, Broccoli is still very immature and in beta. It is lacking in the area of documentation. The read me boldly states that Windows support is spotty. The fact that there is little to no source map support was a big issue for me. It seemed to me that Broccoli wasn’t quite ready for prime time, but I’d really like to see it mature and supplant Grunt and Gulp.

Brunch

Brunch has apparently been around for quite a while and yet flown under the radar. Brunch relies heavily on convention over configuration and assumes your build pipeline will include most of the steps I laid out above. When you include a plug-in, it is assumed to apply at the logical point in the pipeline and requires little or even no configuration. This means that a config that might be 600+ lines in Grunt or 150+ lines in Gulp could end up being 20 lines in Brunch. Brunch is written in CoffeeScript and all the examples are in CoffeeScript, but it isn’t too hard to translate to JavaScript. It natively supports watching and incremental compilation with no extra config whatsoever. There is direct support for Development vs Production builds with different configurations (other build configurations are easily set up). It even wraps JavaScript modules for you automatically (configurable).
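To give a sense of the scale difference, here is a hedged sketch of what a near-complete Brunch config might look like, translated to JavaScript from the CoffeeScript examples (the joinTo targets are illustrative, and the exports.config shape is an assumption based on the docs of the time):

```javascript
// brunch-config sketch: installed plug-ins such as less-brunch or
// uglify-js-brunch apply themselves by convention, so no per-step wiring,
// temp files, or watch configuration appears here at all.
exports.config = {
  files: {
    javascripts: { joinTo: 'app.js' },  // compile and concatenate all scripts
    stylesheets: { joinTo: 'app.css' }  // compile and concatenate all styles
  }
};
```

Watching, incremental rebuilds and production minification all come from convention rather than configuration, which is why the whole file can stay this small.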

With all that, you might think we had found our answer to JavaScript builds. However, there are always problems. Despite the lengthy Guide and the Docs, I still found myself wishing for clearer explanations and more examples. I was surprised to find no support for clean builds baked in. Hopefully, this will be addressed due to the issue I filed. I’m still trying to figure out the right way to run unit tests with Brunch. The real issue for me was npm and Bower packages. Since Brunch attempts to manage your code more and even wraps JavaScript in modules, it doesn’t seem to play well with packages. They claim support for Bower, but this wasn’t well enough documented for me and still didn’t quite seem right. It appears these issues arise because Brunch pre-dates the widespread use of npm and Bower, and they are looking at how they might solve them. In the meantime, be prepared for some awkwardness around that and around Browserify. Lastly, as is common with systems focusing on convention over configuration, if you stray off the established path you might have some issues. For example, I am still trying to figure out exactly how I can get cache busting to apply to assets like images if I am compiling the templates that reference those images.

Conclusions

I am disappointed by the state of JavaScript build tools in 2015. They are very immature and don’t seem to be well informed by the long history of build tools for other environments and platforms. Additionally, their authors don’t seem to be sitting down with the goal of really working through all the issues that come up in a build tool and making sure they are handled and the tool is easy to use. Only two, namely Brunch and Mimosa, seem to have set out with the goal of making most builds very simple to set up without lots of boilerplate configuration.

So which tool would I recommend? Personally, I am still working with Brunch to see if it will work for my project. It would be at the top of my list right now (if you like the convention over configuration approach of Brunch, you could also check out Mimosa). However, if you don’t mind the large awkward configurations then you might want to select Gulp or even Grunt just for the size of the community around them and the selection of plug-ins (though I haven’t had a problem finding the Brunch plug-ins I need). I’m really interested in watching the development of Broccoli and hope it becomes a viable option in the future. The ultimate answer is likely to be a tool that doesn’t even exist today. Once package dependencies are ironed out correctly, I think we are likely to find all the existing tools don’t quite make sense anymore. Of course, there is no telling how long it will take to get to such a state.

]]>
What I Think CoffeeScript Should Have Been 2014-04-16T00:00:00+00:00 6af478db-fa08-4a47-9f4d-5786a5fa9b3b In the first post of this series I laid out why JavaScript is like a minefield. In subsequent posts I laid out why I believe that TypeScript, Dart and CoffeeScript are not the answer to the problems posed by the JavaScript Minefield. I then took a post to consider whether JavaScript linters are the answer; they’re not. That has finally brought us to the point where we are ready to discuss some ideas I have about what an answer to the JavaScript minefield might look like.

My father was a recovering alcoholic, who was sober for 30+ years before his death this past fall. I don’t remember a time before he was sober, but I do remember going to AA meetings with him when I was a little boy. Those meetings were often opened and closed with the Serenity Prayer. Over the years I have found this simple prayer to hold a great deal of life wisdom for dealing not only with addiction, but most of life’s challenges.

God, grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And wisdom to know the difference.

It might seem trite or trivial to do so, but it can be applied to the JavaScript minefield. The reality is, JavaScript is the language of the web. That is unlikely to change any time soon (despite Google’s hopes to the contrary with Dart). Furthermore, JavaScript is littered with land mines. This puts developers in a difficult predicament. We want to develop great software for the web, but we also want to have great tools. How can we change this situation to make it better? We can’t entirely and we will need the serenity to accept what we cannot change about the situation. Despite that, there are ways to address some of the problems with improved tools and languages. That path has been opened to us by the example of TypeScript, Dart and CoffeeScript, but it takes real courage to imagine an alternative to the status quo and even more to work to change it. More important than either serenity or courage, is wisdom to know when to apply each. Without that we easily go astray trying to change the unchangeable, or timidly fall short of the mark.

I’ve worked in JavaScript for a number of years now in my professional career, and I have to admit that I have never particularly enjoyed the experience. I’ve also written a small JavaScript library for input masking where unit testing was critical to ensuring correct behaviour. In that experience I got to really dig into the JavaScript object model. Through all that, I have felt that there must be a better way. All tools and languages have shortcomings and are better suited to certain tasks than others. However, compared to languages like C#, Java and C++ that I have worked in or languages like Scheme and Haskell that I have studied, JavaScript has more than its share of problems. If I had the power to design a language from scratch to be the language of the web that ran in all browsers, I would probably design some kind of statically typed multi-paradigm language combining the best of functional and object-oriented approaches. Unfortunately, we have inherited JavaScript and I recognize that is something outside my power to change. It is, however, possible that we could create an open source language compiled to JavaScript that could dramatically improve on it. I know many are attempting to do just that with varying degrees of success. I believe that no one has yet hit upon the right mix of features and syntax for such a language, and the language that will come to be seen as the successor to JavaScript has yet to be created. Based on my experience, research and aesthetics, I humbly propose 4 guiding principles or philosophies for such a project.

  1. It’s Just JavaScript, but Better
  2. Syntax Matters
  3. Embrace Prototypes
  4. Interoperate, don’t Imitate

Let’s explore each and see how they might be embodied in some proposed syntax. Please note that all code examples are just one possible syntax out of many.

It’s Just JavaScript, but Better

The example of CoffeeScript has clearly shown the advantages of an “It’s Just JavaScript” approach. A clear and direct mapping between the language and JavaScript makes it easy for the developer to understand what is happening and what they can expect. It aids early adoption when support for source maps may not be available yet. It provides a clear path to JavaScript interoperability. Importantly, this approach greatly simplifies the problem for an Open Source project that doesn’t have the resources to tackle a whole new platform approach the way Google has done with Dart. However, being just JavaScript doesn’t preclude deviations in semantics, not just syntax. CoffeeScript often shows an unwillingness to change JavaScript’s semantics even when it would be easy to do so and provide significant improvements to the developer. Instead we should strive to improve on JavaScript semantics when possible. That’s why, rather than “It’s Just JavaScript” or even “It’s Just JavaScript, the Good Parts” I propose “It’s Just JavaScript, but Better”.

Proposed Syntax

In syntax, “boring” is good. Deviating from syntax that is widely known without a good reason will probably just confuse people and hurt language adoption. I propose a largely C-style syntax with proper variable scoping. That is to say, you can’t use a variable or function before it is declared or outside the scope it is declared in. C# has carefully considered a lot of these issues and won’t, for example, allow a variable in a method to shadow another variable or parameter of that method. Scoping should just work, not cause any surprises and protect against unintended behaviour. Semicolons should be required statement terminators and white-space shouldn’t be significant. Parentheses, commas and curly braces are required in the same places languages like Java, C# and JavaScript require them. All of that creates a language that is clear, unambiguous, easy to read and fast to compile.

forkeys(var key in obj)  // loops through Object.keys(obj)
{
  var inner = obj[key];
  inner.method(outer); // error: can't access outer before it is declared
}
inner.method(); // error: can't access inner, which is out of scope
var outer = 3;

To improve on JavaScript, each operator should have a single result type regardless of the types passed to it. This rule is followed in other languages with implicit conversions, like VB.NET. Although the flexibility of JavaScript’s boolean operators leads to cute idioms like using || for null coalescing, it does nothing for readability or bug prevention. Boolean operators should always result in a boolean value.

var x;
x = false or 5; // x == true;
x = "5" + 5;    // x == 10
x = 5 & 5;      // x == "55"
x = 1 == "1";   // false (== compiles to ===)
x = null ?? 5;  // x == 5, ?? returns the right side if the left is null or undefined
x = 4?;         // x == true, ? checks value is not null or undefined
// the existential operator '?' can also be combined
x = something?.method?(); // simply returns null rather than throwing an exception on null

To increase safety, all code should compile to JavaScript strict mode and be wrapped in an IIFE. Additionally, all uses of global scope should be explicit.

::console.log("diagnostic message"); // use :: to access global
use $; // pull global variables into the local scope explicitly
$('#myButton').click(); // because of 'use' the '$' doesn't need '::' before it
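In today’s JavaScript, the compilation target described above might look something like the following. This is my own sketch of plausible compiler output, not anything the proposal specifies:

```javascript
// Strict mode inside an IIFE: nothing leaks into global scope accidentally,
// and only values deliberately returned escape the wrapper.
var result = (function () {
  "use strict";
  var local = "diagnostic message"; // scoped to the IIFE, not global
  return local.toUpperCase();
})();
console.log(result); // "DIAGNOSTIC MESSAGE"
// Outside the IIFE, `local` does not exist; only `result` was exposed.
```

Explicit global access like the proposed :: operator would simply compile to an ordinary property lookup on the global object inside that wrapper.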

Syntax Matters

The words you use for something matter. Words and syntax can either aid in comprehension and remind us of how things work, or can obscure meaning and consistently mislead. Having syntax for something can change how you think about it, how often you use it and ultimately how you program. The first time I remember being truly struck by that was when anonymous inner classes were added to Java. For those not familiar with Java, that feature allows one to create an unnamed “class” inside a method and instantiate an instance of it that has access to the containing scope. It is very much like being able to declare an object literal inside a function in JavaScript and have access to the function closure. In Java, there are enough restrictions on anonymous classes to make for a straightforward translation to Java without that feature. In fact, that is how the compiler implements it. Nevertheless, this feature provided syntax for something and changed how most people wrote Java code even though it didn’t strictly add any capabilities. It was always possible to do before, but was so awkward that it was rarely done. For JavaScript the situation is a little different. With JavaScript there are certain semantics that we will not be able to change without too great a performance or complexity penalty. In those cases, we need to come up with syntax that clarifies the semantics of JavaScript.

Proposed Syntax

I have repeatedly raised the issue of the confusing semantics of this in JavaScript. Attempts to improve JavaScript have generally made no attempt to address this issue. One of the two pillars of my approach is a new syntax for this. In other object-oriented languages, this always refers to the current instance of the class lexically containing the occurrence of this. In JavaScript, it actually has a significantly different meaning, but the value of this ends up being equal to what it would have been given the more common definition in many situations. That is the root of the confusion.

In order to create a syntax that clarifies this, we must first ask what the semantics of this in JavaScript are. The value of this can be determined in four different ways: by invoking a function after the . operator, by invoking the function with an explicit value for this using call or apply, by invoking it as a constructor function using the new operator, or by fixing it to a particular value with bind. The value of this could be better termed the context of the function call.

A further confusion arises when functions are nested and have different contexts. In that situation, it is far too easy to accidentally reference the wrong context with this, because one forgets that the context of the nested function is different from the context of the outer function. This mistake is made so frequently because the more common definition of this is lexically based and the nested function is lexically nested, so it seems it should have the same context. I propose that both of these issues can be clarified by making the context a special named parameter of the function, perhaps listed before the other parameters and separated from the rest by a dot rather than a comma, since the most common way of establishing the context of a function is with the dot operator.

use $; // use jQuery
$("li").map( (item . index) -> // jQuery passes each item as the context to the function
  {
    return cssClass -> item.hasClass(cssClass);
  });

You see above that I have also shortened the verbose function syntax to simply ->. The arrow is widely recognized as a function operator and is a proposed addition to ES6. I propose that it be the only function declaration syntax. Additionally, the return line shows that the parentheses around the argument can be omitted when there is a single argument and the body can be shortened when the function returns a single expression. This functionality is like lambda expressions in C#. So the return line is equivalent to return (cssClass) -> { return item.hasClass(cssClass); };. Notice how there is no ambiguity over the context in the inner function because the context of the outer function is clearly and uniquely named. The typical mistake in JavaScript would be to write that line as return function(cssClass) { return this.hasClass(cssClass); };, but that would be incorrect because this does not refer to the context of the outer function inside the inner function.

A fully defined JavaScript alternative would probably have many other niceties like splats, array slicing, string interpolation, date literals, modules, default parameters, a call operator (perhaps ..) and some kind of asynchronous programming support like C#, but built on top of promises and callbacks.

Embrace Prototypes

JavaScript doesn’t have classes. Instead it has object prototypes and constructor functions. Yet, many developers yearn for classes and try to extend JavaScript with class-like functionality. However, each implementation of classes in JavaScript is different and many are incompatible with each other. JavaScript developers can’t even agree on whether/how to use the new keyword and declare constructors. As many authors, including Douglas Crockford, have noted, it is much simpler to embrace the prototype-based nature of JavaScript and avoid not only classes but constructor functions by using Object.create(...) to set up prototype chains directly. David Walsh has a great three part write up on this (part 1, 2, 3). By accepting this simple truth and embracing it we can greatly simplify both the semantics and syntax of our language. Following the principle that syntax matters, we need a syntax that makes this clear. We can further simplify our language if we accept that JavaScript doesn’t have private properties (at least without introducing a reasonable amount of overhead).
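As a sketch of what that constructor-free style looks like in plain JavaScript (the animal and dog objects are illustrative, loosely mirroring the examples used later in this article):

```javascript
// the base prototype, a plain object
var animal = {
  legs: 4,
  describe: function () { return this.name + " has " + this.legs + " legs"; }
};

// dog's prototype is animal; no constructor function, no new
var dog = Object.create(animal);
dog.speak = function () { return "Woof!"; };

// a concrete dog whose prototype is dog
var myDog = Object.create(dog);
myDog.name = "Skip";

// myDog.describe() finds legs on animal through the prototype chain:
// "Skip has 4 legs"
```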

Proposed Syntax

I think the syntax for objects is important and while I propose a concrete syntax here, I think there is further room for improvement. A syntax for prototypical inheritance is the second pillar of my approach. The essential idea is that all object construction is through object literals, we simply need a way to specify the prototype object when declaring an object literal. Additionally, our syntax can be much clearer if object literals have a clearly distinct syntax from function bodies (this is needed to support the lambda expression style syntax described before).

var anObject = {**}; // an empty object, equivalent to { } in JavaScript

use console&.log; // The & is the bind operator used to bind the context
                  // so this is equivalent to console.log.bind(console)

var animal =
  {**
    // The constructor property of an object is special. When an
    // object is created with animal as its prototype, the
    // constructor will be called on it (unless it has its own
    // constructor).
    constructor = (animal . name) -> { animal.name = name; },
    legs = 4,
    eat = (animal . food) ->
      {
        // Here we see the ability to embed expressions in strings
        log('{animal.name} ate {food} on {animal.legs} legs');
      },
    speak = -> { throw "speak not implemented"; },
  };
  
var dog =
  {*animal* // here we declare what the prototype is
    // We should call the animal constructor from our constructor.
    constructor = (dog . name) ->
    {
      // Notice we use the proto keyword to access its
      // prototype.  When a function is called on proto, the
      // context is the original object (in this case dog)
      dog.proto.constructor(name);
    },
    // As in JavaScript, this speak property will hide the prototype
    // speak property.
    speak = -> { log('Woof!'); },
  };
  
var cat =
  {*animal*
    constructor = (cat . name) ->
    {
      cat.proto.constructor(name);
      // after calling the proto constructor, we can do our
      // own initialization
      cat.lives = 9;
    },
    eat = (cat . food) ->
    {
      if(cat.lives > 0)
        cat.proto.eat(food);
      else
        log('The {food} sits uneaten :(');
    },
    speak = (cat.) -> { if(cat.lives > 0) log('Meow!'); },
    oops = (cat.) -> { if(cat.lives > 0) cat.lives--; },
  };

// Now let's play with our pets

// This is how we pass parameters to the constructor
var myDog = {*dog("Skip")*};

var myCat =
  {*cat("Fluffy")*
    // There was already a horrible accident
    legs = 3,
    lives = 8,
  };

myDog.speak();// prints "Woof!"
myDog.eat('table scraps');// prints "Skip ate table scraps on 4 legs"

myCat.speak();// prints "Meow!"
myCat.eat('a mouse');// prints "Fluffy ate a mouse on 3 legs"

// Functions always return the context unless a return value is specified,
// so we can chain our calls to oops() (even return; will return the context)
myCat.oops().oops().oops().oops() // Someone has it out for fluffy

Note how using the = operator in object declarations increases consistency across the language. Unlike CoffeeScript, though, the language should take JavaScript property attributes (introduced in ES5) into consideration.

var demo =
  {**
    hidden items = [], // not enumerable
    get count = demo. -> demo.items.length, // a getter
    readonly field = 42, // not writeable
    'a property name' = "something",
  };

Support for property attributes will have to be carefully planned. Additionally, it needs to be considered how private properties would be supported if they were added to JavaScript.
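For reference, the hidden/getter/readonly attributes sketched above map onto ES5 property descriptors roughly like this (plain JavaScript, illustrative names):

```javascript
var demo = { items: [] };

// "hidden": make the existing items property non-enumerable
Object.defineProperty(demo, "items", { enumerable: false });

// a getter, computed on each access
Object.defineProperty(demo, "count", {
  get: function () { return this.items.length; }
});

// "readonly": a value that cannot be reassigned
Object.defineProperty(demo, "field", {
  value: 42,
  writable: false
});

demo.items.push("a", "b");
// demo.count is 2, demo.field is 42, and Object.keys(demo) omits "items"
// (writes to field silently fail, or throw in strict mode)
```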

There are some other possible syntaxes for object literals. I welcome suggestions for this. Keep in mind that it has to be distinct enough that the compiler and programmer can distinguish it. The syntax can’t be ambiguous with existing operators or when used with the lambda form for -> described above (i.e. there must be something more than just { to start the object). It should also imply the creation of an object with a prototype (which is not the same as inheritance). Finally, it is worth considering how difficult it is to type, including on international keyboards (many languages avoid $ for that reason). I’ve come up with a few more options that meet those criteria.

// The ~ could be taken to mean 'like a', so that
var myDog = ~dog{name='skip'}; // is an object like a dog
var something = ~{prop=5}; // is a general object like all objects
var empty = ~{};

// Alternatively, the ~ could be moved inside the braces
var myDog = {~dog, name='skip'}
var something = {~, prop=5};
var empty = {~};

// Other operators could be used inside the brace instead
var myDog = {>dog, name='skip'}
var something = {>, prop=5};
var empty = {>};
// or

var myDog = {%dog, name='skip'}
var something = {%, prop=5};
var empty = {%};

Interoperate, don’t Imitate

The last principle that I propose guides how this new language should interoperate with JavaScript and whether it needs to be powerful enough to express every valid JavaScript program. I think interoperability with all JavaScript features is critical. We simply can’t predict how libraries will make use of JavaScript features and must make sure that they can’t break the user experience of our language. For example, I think it is very bad that CoffeeScript assumes property access is side-effect free and idempotent (returns the same value repeatedly). Those are simply not true if the property is actually a getter. While no code written in CoffeeScript will have getters, libraries will increasingly use them as a larger percentage of installed browsers support them. Decisions like that could easily be the downfall of a language. What if the next great library, the next “jQuery”, relies on them heavily? That could be the end of CoffeeScript, or at least a very painful transition for CoffeeScript users while the problem was fixed.

While interoperability is critical, that doesn’t mean it must be necessary to fully use every language feature. I have already proposed abandoning the ability to directly use the new operator or to easily declare constructor functions. It may be the case that not every property attribute or operator is important to have.

Proposed Syntax

While there should be no new operator, it would still be possible to invoke constructor functions declared in other libraries by using them in place of object prototypes.

var birthday = {*Date("7/10/1980")*};

Another way the language can “Interoperate, not Imitate”, is to provide functionality that is similar to but not equivalent to JavaScript. For example, the typeof operator should be improved to at least return something more reasonable for the null value. Additionally, the instanceof operator should be replaced with something that checks directly against the prototype chain rather than falsely implying that there is such a thing as an object type tied to a constructor function.
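In today's JavaScript, that prototype-chain check already exists as isPrototypeOf; a quick sketch of it (with illustrative objects), alongside the typeof null quirk mentioned above:

```javascript
var animal = { kind: "animal" };
var dog = Object.create(animal);
var myDog = Object.create(dog);

// isPrototypeOf asks whether an object appears anywhere in another
// object's prototype chain, with no constructor functions involved
var dogIsAncestor = dog.isPrototypeOf(myDog);       // true, direct prototype
var animalIsAncestor = animal.isPrototypeOf(myDog); // true, checks the whole chain

// and the typeof defect the replacement operator should fix:
var typeofNull = typeof null; // "object", which is misleading
```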

What Now?

Having laid out these principles and proposed some concrete syntax based on them, the question becomes where do we go from here? I am seriously considering whether it is worth the effort of creating another JavaScript alternative. I have always been interested in programming languages and have the experience necessary to implement a compiler. However, I also know what a large commitment a project like that would be. Were I to start such a project, I would probably try to have an open dialogue process where people could provide input into language features and syntax and reasons for design decisions could be documented.

Gauging interest in another JavaScript alternative has been a major motivation in writing this series of articles. To that end, please take a moment to answer the question below. Furthermore, if you would be interested in contributing to such a language in any capacity whether it be coding, managing or design input please email me.

Do you think a language like the one being proposed here should be made?
No
Yes, if the creator is excited about it, why not
Yes, another language choice would be good
Yes, and I would try it out
Yes, and I would definitely use it

This article is Part 6 in a 6-Part Series.

]]>
Are JavaScript Linters the Answer? 2014-04-03T00:00:00+00:00 06a34377-2325-466f-8a3f-fbb6bd6c6703 In the first post of this series, I explained how JavaScript is like a Minefield. Since then, I have argued that TypeScript, Dart and CoffeeScript are not the answer to the minefield. I’m almost ready to move on to considering what the answer is, but first I feel the need to address a couple of points raised by commenters on that first post. They focused on the use of so-called JavaScript linters, the two most popular of which are JSHint and JSLint. These tools check your JavaScript code for adherence to certain best practices and produce warnings or errors where it falls short. For such tools to be helpful, they really need to be integrated into your build pipeline so you are receiving constant feedback. If a developer has to run the tool manually, it likely won’t happen, or will happen so late that there will be a large number of issues to fix.

These tools are absolutely helpful and can mitigate many of the common JavaScript land mines. As a simple example, when using JSLint, the following code produces several errors:

var x, y;
function main() {
  return x == y;
}

JSLint reports both that the code is “Missing ‘use strict’ statement” and that it “Expected ‘===’ and instead saw ‘==’”. JSHint, which is generally less strict, doesn’t warn about those two particular things by default, but it has the eqeqeq and strict options to enable checking for them.
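For completeness, here is the same snippet revised so that it satisfies both complaints (this is my revision, not JSLint output):

```javascript
var x, y;
function main() {
  "use strict";    // satisfies "Missing 'use strict' statement"
  return x === y;  // strict equality instead of ==
}
// main() returns true here, since x and y are both undefined
```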

Signs Don’t Remove the Mines

Despite how helpful these tools can be, I don’t think they can ever really be considered an answer. Simply put, a sign doesn’t fix a problem. The problem still exists. If someone helpfully found every land mine in a minefield and put a sign next to them, the mines would still be there! The signs would certainly be beneficial, but an actual answer would be to defuse or remove the mines. Using a linter is better than nothing, but there is still increased cognitive load to avoid the pitfalls of JavaScript. The tool is just there to remind you when you slip up and teach you when you don’t know. Personally, I don’t need any more things to think about when I am trying to solve difficult programming problems. I want to focus as much as possible on the real business problem, not the difficulties of expressing it in code.

I recognize that this argument will sound abstract and like quibbling over semantics to some. They will say, if it lets me get the job done, it is a fix. I just can’t see it that way. To me, this is the most important part of why linters are not enough. Is it too much to ask for a programming language that doesn’t have such horrendous problems that I need a tool to help me avoid them? The very existence of such tools makes the case that JavaScript is a minefield.

Can’t Check Everything

Even with a lint tool, there are many things about JavaScript that won’t be checked. Additionally, even for issues that these tools address, JavaScript embedded in script tags in pages or in event attributes is generally not run through them. As an example of what is not addressed, tools do nothing to mitigate the confusing nature of the semantics of this.

(function () {
    "use strict";

    function Car(model) {
        this.model = model;
    }

    Car.prototype.drive = function () {
        return alert("You are driving a " + this.model);
    };

    Car.prototype.delayed = function () {
        return function () {
            return this.model;
        };
    };

    var myRide, letsDrive, getModel;
    myRide = new Car("BMW");
    letsDrive = myRide.drive;

    letsDrive(); // alerts "You are driving a undefined"

    getModel = myRide.delayed();
    alert("Delayed model: " + getModel()); // alerts "Delayed model: undefined"
}());

This code passes the strict JSLint tests with a clean bill of health, yet contains what are probably developer mistakes. I’m sure someone with more JSHint/JSLint experience could come up with many more examples. This is not to knock these tools; it just isn’t possible for them to check for these things. A real answer would address these issues, either by changing the semantics or syntax of JavaScript.
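For contrast, here is one way to repair the two mistakes in the example above using Function.prototype.bind (the repair is mine, and alerts are replaced with return values for illustration); the point stands that no linter will make this fix for you:

```javascript
function Car(model) {
  this.model = model;
}

Car.prototype.drive = function () {
  return "You are driving a " + this.model;
};

Car.prototype.delayed = function () {
  // bind fixes the inner function's context to this particular car
  return function () { return this.model; }.bind(this);
};

var myRide = new Car("BMW");

// bind before detaching the method, so the context travels with it
var letsDrive = myRide.drive.bind(myRide);
// letsDrive() now yields "You are driving a BMW"

var getModel = myRide.delayed();
// getModel() now yields "BMW" instead of undefined
```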

Code Reviews

One commenter believed that “practices which enforce code review of every check in can help catch almost all of the silly coding mistakes we make that JSlint or JSHint might miss.” Like the use of linters, I believe that code reviews are good practice, but again they don’t fix the problem. They are just a strategy for mitigating it. Pilots have co-pilots there to double check them for the same reason we do code reviews, but that doesn’t fix a poor cockpit design. Instead, controls that make it easy for pilots to make dumb mistakes are redesigned by plane manufacturers to fix the underlying issue. That way the pilot and co-pilot can spend their time focused on getting everything else right, making the flight safer.

Don’t Code Without One

If you are writing JavaScript code, I strongly recommend you use a linter. On my current project I have dropped CoffeeScript in favour of JavaScript with JSHint. However, I don’t think that has fixed the JavaScript minefield. Furthermore, it has added issues with false warnings and littered our code with JSHint directives that detract from the focus of the code.

In this series, I have considered the most popular responses to the JavaScript minefield. I haven’t found one that truly addresses my problems with the JavaScript language. Of course, there are many more languages out there and I could be missing a great one. Though, I haven’t heard of any that meet what I see as the high level criteria of an answer and address the specific issues of JavaScript. In my next post, I will begin to explore what an answer might look like.

This article is Part 5 in a 6-Part Series.

]]>
Why CoffeeScript Isn't the Answer 2014-04-11T02:00:00+00:00 497c34ae-a14c-493f-acaa-6f2ef91d152a CoffeeScript is an open source project that provides a new syntax for JavaScript. I have to say that I have a lot of respect for CoffeeScript and it got a lot of things right. The “golden rule” of CoffeeScript is “It’s just JavaScript”. That means there is a straightforward equivalent to every line of CoffeeScript. Consequently, there aren’t as many issues with JavaScript interop as there are with Dart (see Why I’m Ditching CoffeeScript by Chris Toshok for a discussion of how this isn’t true for accessor properties and a peek into the politics of open source). It also makes it easy to deal with any shortcomings in tooling, because the developer should be more comfortable switching between CoffeeScript and the compiled JavaScript, for example, when debugging in the browser without the aid of a source map. Though CoffeeScript has matured enough that a lot of those shortcomings have been resolved. Nevertheless, the assurance that you can read the compiled output is comforting. CoffeeScript programs can be very compact and require less typing. The language also protects you from many of the JavaScript land mines. For example, in CoffeeScript == compiles to the strict equality operator ===, making it impossible to use the dangerous loose equality operator.

The strengths of CoffeeScript were the reason that I argued for its use on the last project I was part of. Indeed, we adopted CoffeeScript and used the Mindscape Web Workbench extension for Visual Studio which compiled on save. For me, the compile on save was helpful in learning CoffeeScript because I could immediately see what the resulting JavaScript was without firing up a browser, as required by many of the other options that compile on the fly as part of an asset pipeline. If you’re seeking a similar approach you might also want to check out the popular Web Essentials plug-in. Though I found merging the compiled JavaScript files to be annoying enough that I am now moving away from that approach. I spent 6+ months on that project learning and working in CoffeeScript. Of TypeScript, Dart and CoffeeScript, the last is the only one whose strengths and weaknesses I can really speak to from experience. That experience has led me away from using CoffeeScript on my current project.

Ambiguous Code

I quickly found that there were situations where I couldn’t predict what a given chunk of CoffeeScript would compile to or couldn’t figure out how to write the CoffeeScript for the JavaScript I wanted. The problem arises because in CoffeeScript parentheses, curly braces and commas are often optional, and whitespace and indentation replace them. Frequently, code I thought should compile didn’t. Consider this valid code.

func 5, {
   event: -> 45,
   val: 10}

Now, if the event function needs to be expanded, one would logically think it could be changed like this.

func 5, {
   event: (e) -> 
     if e.something
       36
     else
       45,
   val: 10}

However, that doesn’t compile. You have to drop the comma or move it onto the next line, before val.

How about something like func1 1, func2 2, 3? Does 3 get passed to the first or second function? I still frequently forget one place where parentheses are not optional, leading to unexpected behaviour. The statement x = f -> f looks like it should assign x to the identity function, but it doesn’t. Instead, x is assigned the result of calling f with a function of no arguments returning f. Or how about this: a + b adds a and b, while a +b calls a with the argument +b. There are lots of other examples of ambiguous and confusing syntax in CoffeeScript. For more examples, check out My Take on CoffeeScript by Ruoyu Sun and this Gist from Tom Dale.

Readable, Think Again

Ambiguous code is only one part of what makes CoffeeScript difficult to read. Everything is an expression (returns a value), and the language has lots of control-flow and operator aliases. All of this encourages code that reads like English sentences. However, that often makes the code less readable, not more. The human mind is good at understanding logic in symbols; English is not good at expressing logic. As an example, consider the line eat food for food in foods when food isnt 'chocolate' from the CoffeeScript tutorial. The declaration of what food is occurs in the middle of the line and doesn’t even look like a variable declaration. Furthermore, until you finish reading the line it isn’t clear which foods will be eaten. That code could easily be worse: if unless eat is undefined were added to the end, the whole line would become conditional, and one wouldn’t realize that until reading the end. Imagine if the expression before the for had been a complex multi-line method call with logic in it. Ryan Florence digs deeper into these issues in his post A Case Against Using CoffeeScript. Suffice it to say, many of the features added to CoffeeScript with the intent of making it “readable” actually have the opposite effect.

Variable Capture

In addition to these confusions, CoffeeScript actually creates new mines that aren’t present in JavaScript at all. As explained by Jesse Donat in CoffeeScript’s Scoping is Madness, in its zeal for terseness and “simplicity” CoffeeScript has actually created a major pitfall around variable declarations. In CoffeeScript there is no variable declaration syntax equivalent to var in JavaScript. Instead, each variable is declared in the scope where it is first assigned. This means it is easy to accidentally change the scope of a variable and not even realize it.

To see how this would happen, imagine you are in a hurry to implement the next feature. Unbeknownst to you, the following code is near the bottom of the file you are about to modify.

innocent = (nums) ->
  lastValue = 1
  for num in nums
    doSomething lastValue, num
    lastValue = num
  lastValue

Now, near the top of the file, you add these lines, so that the whole file becomes:

value = 42
lastValue = null
changeValue = (newValue) ->
  lastValue = value
  value = newValue


# many pages of code here


innocent = (nums) ->
  lastValue = 1
  for num in nums
    doSomething lastValue, num
    lastValue = num
  lastValue

Did you catch the error? Originally, lastValue was local to the innocent function, but now it is global to the file and the innocent function actually modifies it. We now have a bug waiting to happen when someone calls innocent then checks the value of lastValue. Keep in mind there could be multiple screens of code between these two code segments.
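The equivalent plain JavaScript does not have this trap, as sketched below: the explicit var declaration keeps the inner lastValue local no matter what is added elsewhere in the file (doSomething is stubbed out for illustration):

```javascript
var lastValue = null; // file-level variable added later

function doSomething(a, b) { /* stand-in for the real work */ }

function innocent(nums) {
  var lastValue = 1; // the explicit var makes this a new, local variable
  for (var i = 0; i < nums.length; i++) {
    doSomething(lastValue, nums[i]);
    lastValue = nums[i];
  }
  return lastValue;
}

var result = innocent([7, 8, 9]);
// result is 9, and the file-level lastValue is still null: no accidental capture
```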

Not Far Enough

Certainly, a language that is “just JavaScript” won’t be able to fix everything. Yet, despite radically altering the syntax of JavaScript and claiming to expose only the good parts of JavaScript, in some ways CoffeeScript doesn’t go far enough in fixing the issues of JavaScript. For example, the + operator is still both numeric addition and string concatenation. That is frequently listed as one of the bad parts of JavaScript. Why not provide separate operators for the two? Likewise, the typeof operator “is probably the biggest design flaw of JavaScript, as it is almost completely broken” according to the JavaScript Garden, but CoffeeScript brings its behaviour over unchanged. Instead, CoffeeScript could have altered the meaning of typeof to something more useful, for example the typeOf() function recommended by Douglas Crockford.
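As an illustration only (a sketch in the spirit of, not a copy of, Crockford's typeOf), such a replacement might repair the two best-known typeof defects like this:

```javascript
function typeOf(value) {
  var t = typeof value;
  if (t === "object") {
    if (value === null) { return "null"; }        // typeof null is "object"
    if (Array.isArray(value)) { return "array"; } // arrays are just "object" to typeof
  }
  return t;
}

// typeOf(null)  -> "null"
// typeOf([1,2]) -> "array"
// typeOf({})    -> "object"
// typeOf("hi")  -> "string"
```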

Classes Are an Illusion

Many developers wish that JavaScript had classes. This has led to numerous alternative ways to emulate class-like functionality in JavaScript and libraries that embody those various approaches. Many of the different approaches don’t interact well with each other. There is even debate on whether to use constructors requiring the new keyword or to make everything a factory function. CoffeeScript has a class keyword that makes it easy to create classes. However, since “it’s just JavaScript”, they are just one particular emulation of classes on top of JavaScript. Consequently, they may not play well with your library of choice. More troublesome is that they make other language features more confusing, in particular the this keyword. When creating a class, you can’t help but feel that this should refer to the class everywhere within the body of the class. That is what it would mean for something to be a class. But the actual semantics of this are unchanged. For example, suppose you declare a class with methods using this.

class Car
  constructor: (@model) -> 

  drive: () ->
    alert "You are driving a " + this.model;

  delayed: () ->
    -> this.model

Because Car is a class, one expects this to always refer to the current car object, but instead it behaves like regular JavaScript.

myRide = new Car("BMW")
letsDrive = myRide.drive

letsDrive() # alerts "You are driving a undefined"

getModel = myRide.delayed()
alert("Delayed model: " + getModel()) # alerts "Delayed model: undefined"

Ultimately, JavaScript isn’t a class based language and classes never quite work right in it. I believe it is a mistake to push the language in that direction. For a more in-depth discussion of why this is the case, see JavaScript Doesn’t Need Class by Ian Elliot.

I have been writing JavaScript for 8 years now, and I have never once found need to use an uber function. The super idea is fairly important in the classical pattern, but it appears to be unnecessary in the prototypal and functional patterns. I now see my early attempts to support the classical model in JavaScript as a mistake.

And So…

CoffeeScript started from a strong position and good philosophical approach to the problem of the JavaScript Minefield. Unlike TypeScript, it was willing to break with JavaScript and fix its issues. Unlike Dart, it was intended to remain true to JavaScript and be fully interoperable. Realistically, the issues I have raised with CoffeeScript don’t come up every day. Ultimately though, they are enough to say we need something better. I hope to share some thoughts soon on what such a solution might look like. However, based on feedback in the comments, we’ll need to discuss JavaScript linters first.

This article is Part 4 in a 6-Part Series.

]]>
Ember.js on Cassette: Embedding Templates 2014-03-22T15:23:00+00:00 aab2932d-a540-461c-a953-f7ccbb4fe738

I’m starting a new web application project, and I’ve decided to use Ember.js. That will enable me to provide users a slick single-page app experience. Since I am a .NET web developer by training, I plan to use C# and ASP.NET Web API as my server technologies. I could use tools like Lineman.js or Grunt to manage and package my client-side code. Instead, I’d like to use the Visual Studio toolchain, because it provides a lot of great tools for web application development. Since I already know Visual Studio, there will be almost no learning curve for me. I’m not trying to argue this is the right choice. I’m just trying to share a little about what it takes to make it work with Ember.js. Early on, I realized that I would want to bundle and minify all my scripts and templates. I’ve chosen the Cassette library for this which provides “asset bundling for .NET web apps”. Here is how I made it work with Ember.js, but first, why did I choose Cassette?

Why not ASP.NET Web Optimization

Ever since Microsoft released the ASP.NET Web Optimization framework, now available as a NuGet package, it has been the default choice on the .NET platform for bundling and minification of scripts and css. Indeed, I initially started with it. However, I quickly ran into a number of issues with it, all of which were made more challenging by the lack of documentation.

It provides support only for minification of *.js and *.css files by default and no support for languages like Less, Sass, TypeScript or CoffeeScript. I’ll be using Less, and possibly CoffeeScript. It seems the encouraged workflow for such languages in Visual Studio is to use the Web Essentials extension to compile them. However, with that, the compiled js or css files are included in the project and need to be committed to source control. I have found that to be problematic when doing merges in a DVCS. Also, it just seems like the wrong thing to do, like committing the compiled assemblies of my .NET code. Even using third-party or custom-built extensions, it generally isn’t possible to mix languages in a single bundle. For example, I might want to combine some JavaScript libraries into a bundle containing CoffeeScripts I have written.

More serious than that, ASP.NET Web Optimization provides no support for compiling or embedding templates. In the official EmberJS template from Microsoft, they use a beta version of a third-party extension to provide template compilation for Ember.js and provide a 117-line class for embedding templates outside the web optimization framework. The third-party library appears to be abandoned, with 8 months since the last commit and still in beta as of March 2014. Even with all that, the EmberJS template’s solution doesn’t switch between embedding and compiling based on whether the web optimization library is in debug mode, leading to possible problems.

Introducing Cassette

The Cassette library preceded Microsoft’s ASP.NET Web Optimization library and is arguably the primary alternative to it. If you’re familiar with how Web Optimization approaches bundling, it may take a while to become accustomed to the Cassette approach. Cassette uses the term “bundle” somewhat differently, which is confusing when you are first learning. I recommend you read the “Getting Started” and “Assets and Bundles” sections of the Cassette v1 docs before reading the v2 docs. Those sections of the v1 docs explain basic concepts not explained in the v2 docs. Until I read them, Cassette wasn’t making any sense to me.

The main difference between the approaches is in what a bundle is. In the Web Optimization library, a bundle is a group of files that will be minified and combined into a single file; referencing that bundle includes a reference to the combined file. In Cassette, by contrast, a bundle is more like a group of files that act as a single dependency, meaning you would never want one file from the bundle without the others. Referencing bundles is then a statement of what the page’s dependencies are. The bundles are then minified and combined into three separate files for css, scripts and templates. For templates, Cassette supports the idea of embedding the templates in the page rather than loading them from a separate file.

At the top of your cshtml page you reference any individual assets or bundles the page depends on:

@{
    Bundles.Reference("Scripts/jquery.js");
    Bundles.Reference("scripts/app"); // the main application bundle
    Bundles.Reference("Scripts/page.js");
    Bundles.Reference("Styles/page.css");
}

Then in the page body, you indicate where the html to include scripts, templates and style sheets should be rendered. This will then group the dependencies by type and, depending on whether Cassette is in debug or production mode, output either debug friendly assets or minified, compressed, cached, versioned assets.

<!DOCTYPE html>
<html>
<head>
    <title>Web App</title>
    @Bundles.RenderStylesheets()
</head>
<body>
    ...
    @Bundles.RenderScripts()
    @Bundles.RenderTemplates()
</body>
</html>

Cassette is a powerful library with many more features and options. Be sure to check out the documentation for the details. Additionally, the author of Cassette has created NuGet packages for drop-in support for Less, Sass, CoffeeScript, TypeScript and more. One of those NuGet packages is for pre-compiling Mustache templates using Hogan.js. Unfortunately, this doesn’t work for Ember.js, because Ember needs its own template compiler due to the get and set methods on Ember models. Nevertheless, we’ll see that all is not lost.

Configuring and Referencing an HtmlTemplateBundle

Setting up a bundle in Cassette for all your templates is very easy. Don’t install the Cassette.Hogan NuGet package, since it doesn’t work with Ember.js. I chose to put all my templates in the app/templates folder, following the example in the EmberJS template from Microsoft. That template uses the extension hbs for Handlebars templates; however, I chose the also common extension handlebars because it is more explicit. Since all the templates are in one directory, creating a bundle was an easy addition to my Cassette bundle configuration.

public class CassetteBundleConfiguration : IConfiguration<BundleCollection>
{
    public void Configure(BundleCollection bundles)
    {
        // Other configuration ...

        bundles.Add<HtmlTemplateBundle>("app/templates", new FileSearch()
        {
            Pattern = "*.handlebars",
            SearchOption = SearchOption.AllDirectories,
        });
    }
}

To reference the templates from my application page only required adding Bundles.Reference("app/templates"); to my bundle references. The templates were then automatically embedded in the page inside script blocks in place of the @Bundles.RenderTemplates() call right before the close body tag.

Giving Templates a “data-template-name”

There were some problems with the embedded templates at this point. The script blocks were being generated with an id attribute based on the path of the template file, and the type attribute was set to “text/html” instead of “text/x-handlebars”. While there is some confusion over this, I believe the data-template-name attribute is the preferred way of identifying your Ember templates, rather than the id attribute. The reason is that nested route templates have names separated with ‘/’, but it is not valid to have the ‘/’ character in an html id. Fortunately, Cassette is built around a very flexible pipeline model, making it easy to customize. After reading some of the documentation, poking around the source and reading some code from the Cassette.Hogan package, I came up with a simple solution.

Cassette allows the bundle pipeline for any bundle type to be easily modified by implementing the IBundlePipelineModifier<T> where T : Bundle interface. All bundle pipeline modifiers are picked up automatically. Fixing the issues was almost as simple as setting the content type of the html template pipeline and swapping out the implementation of how templates were wrapped in a script block.

public class SetupHandlebarsPipeline : IBundlePipelineModifier<HtmlTemplateBundle>
{
    public IBundlePipeline<HtmlTemplateBundle> Modify(
            IBundlePipeline<HtmlTemplateBundle> pipeline)
    {
        // Set the content type
        var index = pipeline.IndexOf<ParseHtmlTemplateReferences>();
        pipeline.Insert(index, new AssignContentType("text/x-handlebars"));

        // Replace how the scripts are wrapped
        index = pipeline.IndexOf<WrapHtmlTemplatesInScriptElements>();
        pipeline.RemoveAt(index);
        pipeline.Insert(index,
            new WrapTemplatesInEmberScriptElements(new HtmlTemplateIdBuilder()));

        return pipeline;
    }
}

The new WrapTemplatesInEmberScriptElements class is basically the same as the old WrapHtmlTemplatesInScriptElements with the id= replaced by data-template-name=.

public class WrapTemplatesInEmberScriptElements : IBundleProcessor<HtmlTemplateBundle>
{
    private readonly IHtmlTemplateIdStrategy idStrategy;

    public WrapTemplatesInEmberScriptElements(IHtmlTemplateIdStrategy idStrategy)
    {
        this.idStrategy = idStrategy;
    }

    public void Process(HtmlTemplateBundle bundle)
    {
        foreach(var asset in bundle.Assets)
        {
            asset.AddAssetTransformer(new WrapTemplateInEmberScriptElement(bundle, idStrategy));
        }
    }
}

public class WrapTemplateInEmberScriptElement : IAssetTransformer
{
    private readonly HtmlTemplateBundle bundle;
    private readonly IHtmlTemplateIdStrategy idStrategy;

    public WrapTemplateInEmberScriptElement(
            HtmlTemplateBundle bundle,
            IHtmlTemplateIdStrategy idStrategy)
    {
        this.bundle = bundle;
        this.idStrategy = idStrategy;
    }

    public Func<Stream> Transform(Func<Stream> openSourceStream, IAsset asset)
    {
        return () =>
        {
            var template = openSourceStream().ReadToEnd();
            var scriptElement = String.Format(
                "<script type=\"{0}\" data-template-name=\"{1}\">\n{2}\n</script>",
                bundle.ContentType,
                idStrategy.HtmlTemplateId(bundle, asset),
                template
            );
            return scriptElement.AsStream();
        };
    }
}

An Exception

If you try the above code with version 2.4.1 or earlier of Cassette, you’ll get an exception when it’s not in debug mode. I didn’t notice this issue until a week after writing the code above, when I deployed to the test environment. The switch between production and debug mode can be confusing in Cassette, because it isn’t your first thought when something works on the developer’s machine but not in the deployment environment. It took me at least an hour to track down the problem. In production mode, the exception you get is a KeyNotFoundException from deep inside Cassette’s bundle cache code. It turns out that setting the content type to "text/x-handlebars" leaves Cassette not knowing what extension to give the cache file. That seems like a poor design choice to me, but essentially the fix is that "text/x-handlebars" needs to be added to a list of known content types. I have submitted a pull request to do this that will hopefully be accepted soon, so this won’t be a problem in future versions. Until then, you can work around it by adding the following hack to the beginning of your CassetteBundleConfiguration.Configure() method.

// Hack so we can use type="text/x-handlebars" in release mode
var bundleType = typeof(Bundle);
var field = bundleType.GetField("FileExtensionsByContentType",
                                 BindingFlags.Static | BindingFlags.NonPublic);
var fileExtensionsByContentType = (IDictionary<string, string>)field.GetValue(null);
// Sometimes the config is run again and it is already there
if(!fileExtensionsByContentType.ContainsKey("text/x-handlebars"))
fileExtensionsByContentType.Add("text/x-handlebars", "htm");

With that, my Ember.js templates were embedded correctly into the page, and developers could begin work with a clean separation of the templates into individual files. Obviously, before production release I would like to be able to enable compiled templates when Cassette is not in debug mode, but that challenge can wait for another day.

]]>
Why Dart Isn't the Answer 2014-03-08T00:00:00+00:00 e228a1a9-9305-43be-9749-bf0694827ca6 Dart is Google’s latest answer to large scale web application development. Dart isn’t just a new programming language; it is a “platform” that includes its own standard libraries and tools. Additionally, even though Dart compiles to JavaScript, there is also a Dart VM that runs in a preview version of Chrome called Dartium. The language itself will feel very familiar to developers who have worked in JavaScript, Java or C#. Like TypeScript, it’s “optionally typed”. That means type declarations are optional, but when you provide them the compiler will report type checking warnings. Unlike JavaScript, it is class based rather than prototype based. It fixes problems with both the syntax and semantics of JavaScript.

Dart is a much more ambitious project than Microsoft’s TypeScript. It moves further away from both the syntax and semantics of JavaScript. So the JavaScript produced by the Dart compiler, while quite readable, may not correspond one-to-one with the original Dart source. The Dart compiler applies more transforms and optimizations to your code. Beyond that, even core libraries are replaced. Dart has its own DOM manipulation library that differs from the standard one provided by browsers. This allows them to fix not only problems with JavaScript, but problems with the browser APIs, which are widely regarded as one of the worst parts of client side web development. This ambitiousness makes Dart an exciting project that appears to be a real improvement over the current state of affairs.

Before we look at some problems with Dart, a word about the Dart VM. Since Google wants Dart to eventually be the platform of the web, they are hoping they can convince browser makers to include a native Dart VM. Since they control one of the big three browsers, they are already a third of the way there. However, many people feel it is unlikely the other browsers will follow suit. It wouldn’t seem to be to their benefit to spend the time and money doing so. When Microsoft tried basically the same thing with VBScript in IE, it didn’t go well. Admittedly, the browser market isn’t as contentious and political as it was then, but competitors will always be competitors. To address this, Google has the, quite effective, compile to JavaScript escape hatch. The situation will be different if all the major browsers include a native Dart VM some day, but for now, the idea of a Dart VM is irrelevant to whether Dart is the answer to the JavaScript Minefield.

I guarantee you that Apple and Microsoft (and Opera and Mozilla, but the first two are enough) will never embed the Dart VM.

Brendan Eich, creator of the JavaScript language & active participant in JavaScript standardization

JavaScript Interop

Dart is such a radical departure from JavaScript that it is not possible to interact directly with JavaScript libraries from Dart. Instead, you must use a special interop library that exposes wrapped versions of any JavaScript objects you access. This enables Dart to safely sandbox JavaScript away and prevent its problems from leaking into a Dart application. It is very reminiscent of what Microsoft had to do with COM interop for .NET, albeit for somewhat different reasons. Like COM interop, JavaScript interop is not a pleasant experience. It’s a necessary feature for the times when the only implementation of a library you need isn’t in the platform you are working in, but you avoid it whenever possible. The problem is that this tends to silo you in the platform you have chosen. Currently, many new and exciting JavaScript libraries are being released, while the Dart platform is immature and hasn’t had time to fill out with all the options a developer might want. Being siloed into the Dart platform is a very high price to pay to avoid the JavaScript minefield.

Another GWT?

The Google Web Toolkit (GWT) is a project first released by Google back in 2006. It provides a platform allowing developers to create client side web applications in Java that are then cross-compiled to JavaScript. The GWT project has a lot of similarities to the Dart project. Both create a siloed platform, with restricted interop options, that addresses the pitfalls of working directly in JavaScript and the browser. The largest difference is that GWT builds on an existing language (Java) and platform, which are potentially not as well suited to the needs of web development and are semantically more distant from JavaScript. Nevertheless, the history of the GWT project is instructive. While it was released with fanfare and promise, it has remained a niche solution and is not where the exciting advances in web development are being made today. I don’t see why the future of Dart should be any different.

Still not statically typed

It’s surprising to me that Google would deviate so far from the semantics of JavaScript and include optional typing but stop short of actually having static type checking. Dart’s type annotations have no effect on the execution of the code and the compiler only reports warnings, not errors, for type violations. Essentially, it is as if someone took a dynamic language like Ruby and added type annotations to it without actually changing the way the language works. Because of this, it is possible to put incorrect and misleading type declarations in a program, for example declaring an integer variable as an array of strings, and still have the program execute correctly. C# has shown with its dynamic type that a mix of dynamic and static typing can be interesting and useful. A language showing another way of mixing the two with dynamic as the default would be very interesting. Unfortunately, that isn’t what Dart is. Dart is a dynamic language, plain and simple. Type checking is just a jsHint style suggestion that something might be wrong. What’s the use of a type declaration if it doesn’t mean the value will actually be of that type?

To dig deeper, check out Why Dart is not the language of the future by Rafaël Garcia-Suarez.

Not the Answer

Previously we saw Why TypeScript Isn’t the Answer; today we have seen that Dart isn’t the answer either. It is much more ambitious and addresses many more of the JavaScript mines, but ultimately it will be doomed to go the same way as GWT and not become a major player in web development. That is mostly a consequence of its separate platform approach, but also of a few poor choices in the design of that platform.

This article is Part 3 in a 6-Part Series.

]]>
Why TypeScript Isn't the Answer 2014-04-11T01:00:00+00:00 52814372-82e9-4945-916a-bb2dd7915c7b

I previously wrote about the minefield that is JavaScript programming and several possible answers to the problem. One possible answer is TypeScript, an open source project from Microsoft. The language “is a typed superset of JavaScript that compiles to plain JavaScript”. It builds on JavaScript by adding classes, modules, interfaces and optional type declarations. When compiled, the type declarations are erased and ECMAScript 3 compatible code is generated. Where possible, TypeScript tries to match its syntax and semantics to proposals for ECMAScript 6. Some parts of those proposals are still very much in flux, and it’s not clear what the final spec will be, so we’ll have to see how TypeScript handles that.

Fixes the Wrong Problem

TypeScript enhances JavaScript with types, classes and interfaces. Some people think that is the problem with JavaScript. It’s not. The problem with JavaScript is not that it is a dynamically typed, prototype based, object-oriented language without classes. That is actually JavaScript’s strength. The problem is that it is a poorly designed language, filled with many hidden land mines awaiting the unsuspecting developer.
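A minimal sketch, in plain JavaScript, of what that prototype based, class free style looks like (the names here are purely illustrative):

```javascript
// A plain object serves as the prototype; no class declaration needed.
var greeterPrototype = {
  greet: function () {
    return "Hello, " + this.greeting;
  }
};

// Object.create builds a new object that delegates to the prototype.
var greeter = Object.create(greeterPrototype);
greeter.greeting = "world";
console.log(greeter.greet()); // "Hello, world"

// Behaviour is shared through delegation, configured per object at runtime.
var shoutingGreeter = Object.create(greeterPrototype);
shoutingGreeter.greeting = "WORLD";
console.log(shoutingGreeter.greet()); // "Hello, WORLD"
```

Objects delegate to other objects directly; no class hierarchy ever needs to be declared up front.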

I think that JavaScript’s loose typing is one of its best features and that type checking is way overrated. TypeScript adds sweetness, but at a price. It is not a price I am willing to pay.

Who Maintains Type Definitions?

TypeScript adds optional type declarations, but when interacting with existing JavaScript libraries there are no type declarations and a lot of TypeScript’s benefits disappear. To deal with that, TypeScript supports type definition files. These are hand written files that provide the missing type declarations for an existing JavaScript library. Having good type definition files for the JavaScript libraries you want to use is an important part of having a good TypeScript experience. Microsoft points to the DefinitelyTyped project as the source of type definitions for popular JavaScript libraries. However, what happens when the library you want to use isn’t popular enough, or is too new? Or what if there are type definitions, but not for the particular version of the library you need to use? Have you actually looked at how frequently many of these JavaScript libraries release new versions? How can you be sure the definitions are correct? They are just one more source of potential development issues. Any such library add-ons are bound to be an additional source of headaches if they are not maintained by the library author. Even if you aren’t using TypeScript, you may have already experienced a problem like this in other projects with community maintained packages. Recently, I did, when the NuGet packages for the somewhat new Ember.js library were out of date, and when the package for jQuery failed to correctly support side by side installs of the 1.x and 2.x code lines.

Still JavaScript

The real problem with TypeScript is contained in the statement that it is a “superset of JavaScript.” That means all legal JavaScript programs are also legal TypeScript programs. TypeScript doesn’t fix anything in JavaScript beyond the few things that were fixed in ECMAScript 5. So, for example, the non-strict equality operator == is still there and still has the shorter, more natural syntax than the strict equality operator ===. There is still the strangeness of semicolon insertion. In some cases, the additional features actually make it more likely a developer will adopt the wrong mental model of the language semantics and walk right into a mine. Classes make the unchanged behaviour of the this keyword more confusing. For example, in a class like Greeter from the TypeScript playground, the use of this is confusing:

class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}

One can’t help but feel the this keyword in the methods of Greeter should always reference a Greeter instance. However, the semantics of this are unchanged from JavaScript:

var greeter = new Greeter("world");
var unbound = greeter.greet;
alert(unbound());

The above code displays “Hello, undefined” instead of the naively expected “Hello, world”.
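The unchanged JavaScript workarounds still apply here. As a sketch, the constructor function below mirrors the plain JavaScript form of the Greeter class, and shows the two standard ways of supplying this correctly:

```javascript
// Plain JavaScript equivalent of the Greeter class above.
function Greeter(message) {
  this.greeting = message;
}
Greeter.prototype.greet = function () {
  return "Hello, " + this.greeting;
};

var greeter = new Greeter("world");
var unbound = greeter.greet;

// Supply this explicitly at the call site...
console.log(unbound.call(greeter)); // "Hello, world"

// ...or fix this permanently with bind.
var bound = greeter.greet.bind(greeter);
console.log(bound()); // "Hello, world"
```

Neither of these is TypeScript-specific; the class syntax changes nothing about how this is resolved.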

Update: A commenter (alleycat5 on Reddit) pointed out that TypeScript partially addresses issues with ==, because it will produce type errors for comparisons with == when it has type information.

var a = "ssdf";
var b = 5;
alert(a==b); // "Operator '==' cannot be applied to types 'string' and 'number'."

However, if either variable has type Object or any, it will not produce an error and will continue to evaluate loose equality. For those not familiar with TypeScript, any is a special type that can hold either an object or a primitive such as a number.

Not the Answer

I conclude that TypeScript is not the answer. Or perhaps it’s more accurate to say it is the answer to a different problem. If you love JavaScript, warts and all, but wish it had classes, modules, interfaces and static typing then TypeScript is the answer. My prediction is that in time people will come to realize TypeScript doesn’t eliminate the JavaScript minefield and only makes it more confusing by providing the illusion of safety. TypeScript will become just another tool along the web development roadside used by a niche market of developers.

This article is Part 2 in a 6-Part Series.

]]>
The JavaScript Minefield 2014-03-22T22:54:00+00:00 4c525d69-dee8-4d0e-b861-19bfa341eb67 When I was starting out as a programmer, I learned and worked in C++. There weren’t that many options for Mac OS 7 development at the time. I had a copy of Metrowerks CodeWarrior. It sure beat MPW, which was Apple’s own development environment. For languages, the choices were pretty much Pascal, C or C++. Perhaps Pascal would have been a better first language to learn, but that’s not what I picked. As I really learned C++ and began programming in it, I discovered that C++ is a very large and complex language. Why? Well, there are a number of reasons. One is that it follows the zero overhead principle, basically “What you don’t use, you don’t pay for.” That means every language feature has odd limitations and pitfalls to make sure it can be implemented in a very efficient way. Another is that, due to the focus on low level efficiency, there are no safety checks built into the language. So when you make a subtle mistake, which is easy given all the weird edge cases, the program compiles and silently does the wrong thing in a way that maybe succeeds 99% of the time but crashes horribly the remaining 1%. Finally, the language is designed for maximum power and flexibility; it lets you do anything, even the things you shouldn’t do. This produces a programming minefield where at any moment one might be blown up by some obscure behaviour of the language. Because of that, and because other developers and the standard library make use of every language feature, one must learn the whole language. However, C++ is so big and convoluted, learning it is really hard.

C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows your whole leg off.

Bjarne Stroustrup, creator of C++

The New Minefield, JavaScript

My days of C++ programming are long past. But of late, I’ve been spending more and more of my time in a new programming minefield, JavaScript. When I first started out as a web developer in the early days of .NET web forms, we avoided JavaScript as much as possible. It was slow and plagued by browser differences. Now, JavaScript: The Good Parts and jQuery are in our rearview mirror and we have our sights set on JavaScript MV* frameworks like Ember.js, AngularJS, and Backbone.js. That means, whether we like it or not, we’ll be writing a whole lot of JavaScript. Many people are fully embracing JavaScript, bringing us tools like Node.js, RequireJS, Lineman and Grunt. Those are all great things, but they don’t change the fact that JavaScript is a minefield.

How is JavaScript a minefield? Well, JavaScript has all sorts of pitfalls lurking for the developer. Each pitfall is like a mine in the minefield, silently waiting for you to accidentally step on it. Just like the minefield, JavaScript’s mines are hidden in plain sight. Entire books have been written about all the mines present in JavaScript. Maybe I’ll get into what some of those are in future blog posts. Now, if you are going to venture into a minefield, you need a way to avoid stepping on a mine. You need either a safe path through the minefield or a detailed map of all the mine locations.

No Safe Path

Douglas Crockford was trying to provide a safe path through the JavaScript minefield when he wrote JavaScript: The Good Parts. He did an admirable job of laying out a subset of the language that is sufficient but avoids many of the mines. However, the problem with any safe path through a minefield is that if you ever stray from the path, it doesn’t help at all. Sometimes you need to stray from the path, as when you read or modify someone else’s code, or use a third party library that requires other language features. Worse is when you accidentally stray off the path, something that is all too easy in JavaScript. That happens when, for example, you inadvertently use a variable without declaring it, even though Douglas Crockford warned you not to do that. So we see that the safe path is sufficient for your first foray into the minefield, but not enough if you plan on really getting work done there.

When Maps Fail

Given that a safe path through the JavaScript minefield isn’t enough, it seems like we need a detailed map of the minefield. Many books and blogs have been written to provide that map. JavaScript: The Definitive Guide by David Flanagan is one of the most detailed of those books. The JavaScript Garden is a good place to start learning about the mines online. In a real minefield, a map would let you navigate safely, but at the cost of greatly slowing you down. In JavaScript, it becomes necessary to hold the complete “map” in your head while you are writing code. It’s just not possible to constantly be looking things up while coding. Also, you still need to remember when to look something up. The task of programming is already difficult enough, as you try to juggle the problem domain and solution while expressing that to the computer, without the added cognitive load of keeping the JavaScript minefield map in your head. I would say most people can’t do it, or at least can’t do it well. That is one of the primary reasons languages like Java and C# have beaten out C++.

Furthermore, many of the JavaScript mines are easy to step on even when you know they’re there. For example, you might use a for in loop without checking hasOwnProperty because you mistakenly think that it is safe in your particular case. Beyond that, the human mind just isn’t set up to handle situations where things do not match their names. No matter how many times I tell myself that the this keyword doesn’t mean this, my brain still falls back into thinking of it that way. It’s as if someone put a sign reading “Cave Tour” over the bear den. When I’m busy talking to my friend, thinking about how hungry I am and trying to find the cave tour, I’m liable to walk into the bear den no matter how many times I was told the sign is wrong. The fault isn’t with me, it’s with the sign.
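The for in mine mentioned above is easy to reproduce: once anything adds an enumerable property to Object.prototype, every unguarded for in loop picks it up. A sketch:

```javascript
// Some library, somewhere, extends a prototype...
Object.prototype.config = "surprise";

var counts = { apples: 3, pears: 5 };

var unguarded = [];
for (var key in counts) {
  unguarded.push(key); // picks up the inherited "config" too
}
console.log(unguarded); // [ 'apples', 'pears', 'config' ]

var guarded = [];
for (var key2 in counts) {
  if (counts.hasOwnProperty(key2)) {
    guarded.push(key2); // own properties only
  }
}
console.log(guarded); // [ 'apples', 'pears' ]

delete Object.prototype.config; // clean up after ourselves
```

The trap is that the unguarded loop works perfectly until the day some other code touches the prototype, which is why "it's safe in my case" reasoning fails.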

JavaScript History

So how did we end up in this JavaScript minefield? Well, like a lot of minefields, it exists because of a lot of messy history. JavaScript was created in 10 days in May 1995 by Brendan Eich. He incorporated many great ideas from Scheme and prototype based object oriented languages. However, the syntax and certain other features of Java got tossed in too, and there were plenty of warts, for better or worse. It’s just not possible to design a language in such a short time and get everything right. Since then, the language has evolved through a messy standardization process haunted by the browser wars. Today JavaScript is experiencing a renaissance, but it can’t escape its history.

Can We Clear the Minefield?

So what are we to do about this JavaScript minefield if we aren’t prepared to just accept life in a minefield? Well, a lot of people have been working on that and trying to offer solutions. Those solutions fall into three categories, since hey, there is only so much you can do with a minefield. The three possibilities are:

  • Clear the minefield
  • Go to a different field
  • Build atop the minefield

The most direct approach would seem to be to actually clear the minefield by removing the mines. Unfortunately, in the case of JavaScript, that means actually changing the language, which is a very long and difficult proposition. Nonetheless, some of that has been done through the ECMAScript 5 standard, which fixed the fact that the developer could redefine undefined and NaN, as well as adding strict mode to address other mines. Still, that isn’t enough for those who need to work in JavaScript now, and the process doesn’t show any signs of clearing the minefield in the near future. Instead, it looks like ECMAScript 6 will just expand the minefield to encompass more area (to their credit, the new territory looks a lot better and safer than the old).
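The ECMAScript 5 fix for redefining undefined can be seen directly: the global undefined is now a non-writable property, and strict mode turns an assignment to it into an outright error. A sketch:

```javascript
function tryToRedefineUndefined() {
  "use strict";
  try {
    undefined = 42; // throws TypeError under strict mode
    return "reassigned";
  } catch (e) {
    return e instanceof TypeError ? "TypeError" : "unexpected";
  }
}

console.log(tryToRedefineUndefined()); // "TypeError"
console.log(typeof undefined);         // "undefined" -- still intact
```

In non-strict code the same assignment is silently ignored rather than throwing, which is the quieter half of the same ES5 mine-clearing.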

Then there is the approach of going to a different field. Unfortunately, there aren’t any good alternatives, because JavaScript is the only language that runs in the browser. Various companies have attempted to solve this problem by providing plug-ins and other languages for the browser. There were Java applets, then Adobe Flash, and more recently Microsoft Silverlight brought .NET to the client. Before that, Microsoft had tried to create an alternative by putting VBScript into IE, but no one else jumped on board. Unfortunately, none of these has succeeded in providing a truly cross-browser programming ecosystem that developers could count on being installed on client computers. Most created a sandbox within the web page, rather than integrating with it like JavaScript. While all those technologies still exist, none of them is a serious contender for the future of rich web app development.

The last option is to build atop the minefield. This is where the minefield metaphor breaks down a little. But to push on, it is like building a deck over the minefield so that one can enjoy the nice view and safely walk above the mines without any risk of setting one off, even though they are still below you. One of the first important such technologies is GWT from Google. It provides a true Java to JavaScript cross compiler. Since then many different projects have appeared to provide cross compiling from various new and old languages into JavaScript. The asm.js project is even standardizing a subset of JavaScript as the target of such compilers. The rapidly growing support for source maps, which allow debugging of cross compiled languages in the browser, promises to significantly reduce the friction of using such tools. Currently, all the most promising technologies for solving the challenges of JavaScript fall into this final category.

Today, the three main contenders for superseding JavaScript are CoffeeScript, TypeScript and Dart. However, it’s not at all clear any of them will be able to dethrone JavaScript despite it being a minefield. In future posts, we’ll look at each of these three alternatives to JavaScript and whether we should adopt them or stay with JavaScript, perhaps with the aid of a tool like JSHint.

This article is Part 1 in a 6-Part Series.

Edit 2018-01-10: Removed dead link to DailyJs.com about the history of JavaScript.

]]>