The Dovetail Blog

Here's what's in a name

Project names

 

One topic I get enthusiastic about when starting a new project is how to name the system under development.

Why is a good name important?

  1. It promotes clear communication between stakeholders, and clarity is a Dovetail core value. I worry when a generic term like “the system” is used in a meeting - inevitably somebody is left wondering “Which system exactly?”
  2. It gives the nascent software system its own identity. This helps stakeholders to engage with the project even though it may still be abstract to them. They can visualise the solution better when it has a name, leading to more creativity and thorough analysis.

So what makes a good name? Here are my suggestions:

  • It should be unique rather than generic. If it stands out a little it helps give the new system its own personality.
  • It should be a single word, so short that it never occurs to anyone to abbreviate it in speech or writing. This promotes consistent use by being the easiest way to refer to the new system.
  • Its pronunciation should be unambiguous. This removes the fear of saying it "wrong", another barrier to universal adoption.
  • Don't try to describe the project in its name. You will probably end up with something cumbersome. The name will also be prone to irrelevance as the project grows and evolves.
  • The meaning of the word really doesn't matter, so don't sweat it too much. Of course it can be a nifty acronym or something related to the project, but it can also just be a word that sounds good. Like a child, the project will grow into its name, everyone will get used to it, and eventually you won't be able to imagine any other name sounding right.
  • Don't worry about the permanence of the name. You’re just choosing something for internal use by stakeholders. If the system is launched to a wider audience you can give it a public-facing name at that time, and it will probably be better than anything you think up at this stage.
  • Do get buy-in from key stakeholders. Your goal is universal adoption: people find this surprisingly easy when their boss loves the name!

Here are some good examples of actual Dovetail projects:

  • HARPS
  • Hermes
  • Athena
  • Seagull
  • Osprey

HARPS was a neat acronym we laboured over when the project started years ago, but nobody remembers what it means now. Hermes is a project for a sports body, so we named it after the Greek god associated with sport. Athena was a seemingly random suggestion by a client after I shared my guidelines above.

As for the last two: when we’re stuck we just pick a bird’s name. It works every time, showing how unimportant the actual word is!

 


Custom JavaScript parser vs Jison - Our experience

 

We recently announced QuickDBD, a simple product we made for drawing database diagrams by typing. If you take a look at the QuickDBD app you'll see it converts source code into a diagram. What we needed to make this work was obviously a parser.

After a bit of research on how to approach the problem, we knew we would either use an existing parser generator or build a custom parser ourselves. After narrowing the choices down, PEG.js and Jison emerged as the two most popular JavaScript parser generators at the moment. Of the two, Jison seemed to have the slightly bigger community - a few more GitHub followers, more StackOverflow questions and slightly better documentation. It seemed like the better bet, so we decided to spend a bit of time playing with it and trying to make it parse the QuickDBD syntax.
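To give a flavour of what working with Jison looks like, here is a tiny grammar along the lines of the hex-string example from the Jison documentation. This is purely illustrative - it is not the QuickDBD grammar:

var Parser = require("jison").Parser;

// A toy grammar: the input is one or more whitespace-separated hex strings.
var grammar = {
  lex: {
    rules: [
      ["\\s+", "/* skip whitespace */"],
      ["[a-f0-9]+", "return 'HEX';"]
    ]
  },
  bnf: {
    hex_strings: ["hex_strings HEX", "HEX"]
  }
};

var parser = new Parser(grammar);
parser.parse("adfe34bc e82a"); // parses fine
parser.parse("adfe34bc x");    // throws a parse error

You describe the language declaratively and Jison generates the parser for you, which is great right up until you need behaviour the grammar can't express cleanly.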

We managed to get it parsing the first version of our syntax from a few months back pretty quickly. But since the language we came up with for QuickDBD is closer to a data description language than to what most people would consider a programming language, we started hitting bumps in the road just as quickly. We soon ran into edge cases we couldn't handle with Jison alone, which meant overriding Jison behaviour and injecting custom bits of JavaScript into it.

That felt pretty messy, so we talked it over and decided to go with our own custom JavaScript parser, for several reasons:

  • we would have complete control over how the parser works
  • everyone here is very well versed in JS
  • Jison was new to everyone and there is a bit of a learning curve in being able to do stuff with it efficiently
  • it felt more like we were fighting Jison to make it do something it wasn't designed for than like it was a great tool empowering us to do things better and faster
  • a couple of times it was pretty hard to find information on how to do something with Jison, so we had to fall back to reading its source code to figure things out
  • it didn't feel like the right tool for the job

We did, however, pick up some ideas from trying it out, and I believe they made the custom parser we came up with that much better. We wrote a parser that's fairly small, fast and easy to read, expand and fix - which is ultimately what we needed.
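To give a rough idea of the shape of such a parser, here is a heavily simplified sketch. The syntax below is made up for illustration (one table name per block, followed by its columns) and is not the real QuickDBD language, but the line-by-line approach is similar:

// Parses a made-up, line-based syntax such as:
//   Customer
//   id PK int
//   name varchar
//
//   Order
//   id PK int
//   customerId int
// into an array of { name, columns: [{ name, type, isPrimaryKey }] } objects.
function parseTables(source) {
  var tables = [];
  var current = null;

  source.split('\n').forEach(function (rawLine) {
    var line = rawLine.trim();

    if (!line) {            // a blank line ends the current table block
      current = null;
      return;
    }

    if (!current) {         // the first line of a block is the table name
      current = { name: line, columns: [] };
      tables.push(current);
      return;
    }

    var parts = line.split(/\s+/);   // e.g. "id PK int"
    current.columns.push({
      name: parts[0],
      isPrimaryKey: parts.indexOf('PK') > -1,
      type: parts.slice(1).filter(function (p) { return p !== 'PK'; }).join(' ')
    });
  });

  return tables;
}

Each line is classified on its own, errors can be reported with an exact line number, and adding a new rule is just another branch - which is what made this approach easy to expand and fix.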

I still think Jison is a great tool, but it just wasn't a very good fit for our needs. If you're considering using it, perhaps try it out on a smaller subset of your language's features first and see how you like it before committing to it. You can always go back to writing something custom after you've tried it out.

I also recommend reading this very good Stack Overflow thread on parser generators vs custom parsers, which lists pros and cons for both sides.

Hope this helped!


Hello QuickDBD!

Quick Database Diagrams

For the last couple of months we've been working on a side project here at Dovetail. Martin and Trevor wanted a tool to quickly draw/prototype database diagrams by typing. So, we're happy to announce QuickDBD! We decided to wrap it in a shiny design and make it a little product which we hope others will find useful as well. In time, if there is enough demand, we'll expand the feature set. If you have any ideas or suggestions, please let us know on our roadmap Trello board.

In the process of making QuickDBD, a lot of cool, interesting technologies were used and no programming languages were harmed! We used things such as AngularJS, TypeScript, JointJS (for diagram rendering - awesome library!), Karma and Jasmine (for testing), Angular Material and SASS on the front-end; .NET Web API, xUnit and MS SQL on the back-end; and we automated our build-test-deploy pipeline with bower, gulp, TeamCity, Octopus Deploy and Azure. A very interesting journey!

We hope you like QuickDBD as much as we do. If you have any feedback, please let us know!


Integrating Karma code coverage with TeamCity

To unit test our Angular apps we use the Karma test runner and the Jasmine testing framework. Locally we run these tests using a gulp script that takes care of the whole app build process. To ensure nothing is broken before publishing the app to production, we also run the tests during the continuous integration process using TeamCity.
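For orientation, the gulp step essentially boils down to starting Karma programmatically. A minimal sketch looks something like this (our real script does more, and the file names here are just the usual defaults):

// gulpfile.js (simplified)
var gulp = require('gulp');
var Server = require('karma').Server;

gulp.task('test', function (done) {
  // Run the test suite once using the karma.conf.js described below.
  new Server({
    configFile: __dirname + '/karma.conf.js',
    singleRun: true
  }, done).start();
});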

This post assumes you already have a gulp testing process in place and it won't cover that part. It also assumes you have a working TeamCity setup. It will only help you integrate Karma with TeamCity as an additional build step, so that you end up with something like this in TeamCity.

Number of passed/failed tests:

The code coverage tab:

There are a few requirements before we can make this work. To help you better understand our setup, here is a sample project structure that we have:

The first thing to do is ensure you have the following npm packages installed and that they are saved in your package.json file:

"karma": "^0.13.22",
"karma-chrome-launcher": "^1.0.1",
"karma-coverage": "^1.1.1",
"karma-jasmine": "^1.0.2",
"karma-phantomjs-launcher": "^1.0.0",
"karma-teamcity-reporter": "^1.0.0",

Next ensure that you have the following set up in your karma.conf.js:

  • "coverage" and "teamcity" in the reporters list
  • "PhantomJS" in your browsers list
  • singleRun set to true
  • our coverageReporter configuration looks like this (this part is pretty important):
coverageReporter: {
  dir: 'coverage',
  reporters: [
    { type: 'html', subdir: 'html' }
  ]
}
  • set the preprocessors configuration to something like this:
'path/to/code/you/want/to/test/*': ["coverage"]
  • NOTE: we do not have the plugins property set up
  • the rest of the options are pretty much standard - add/remove what you need (a consolidated sketch of the file follows below)
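Putting the points above together, the relevant parts of karma.conf.js look roughly like this (a trimmed-down sketch - the preprocessor path is a placeholder for your own source folder):

// karma.conf.js (trimmed-down sketch)
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    browsers: ['PhantomJS'],
    reporters: ['coverage', 'teamcity'],
    singleRun: true,

    // Instrument the application code (not the tests) for coverage.
    preprocessors: {
      'path/to/code/you/want/to/test/*': ['coverage']
    },

    // Write the HTML coverage report to coverage/html - TeamCity picks this
    // folder up later via the artifact path described further down.
    coverageReporter: {
      dir: 'coverage',
      reporters: [
        { type: 'html', subdir: 'html' }
      ]
    }

    // files, basePath and the rest are standard - keep whatever you already have.
  });
};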

Now that this is all set up, head over to TeamCity. This is essentially what our client-side build process looks like:

The step that is the main interest of this post is the "Run Karma Tests" step. Here is how we have it set up (create a Command Line step):

This is a slightly modified version of what the Karma documentation recommends. The difference is that we force the use of the local Karma module and we specify the configuration file as a command line parameter, like this:

node node_modules/karma/bin/karma start karma.conf.js

The last piece of the puzzle is setting up the coverage artifact. Go to the General Configuration Settings of your project in TeamCity and add an additional coverage artifact path (the second line):

The important bit (it's simply where our coverage html files are located):

Project.WebApp/coverage/html/** => coverage.zip

Go back and see how we have the coverage/html folder in our project structure; it is set by the coverageReporter property in karma.conf.js. This artifact path takes all the files from the coverage/html folder and compresses them into a coverage.zip archive. After the build process finishes, TeamCity will (if it is able to find the coverage.zip archive inside the artifacts folder) automatically import it as code coverage for the project, and you will be able to navigate to the "Code Coverage" tab for that specific build. If any tests don't pass, this step will also fail, stopping the build and preventing it from ending up in production.

Hope this helps. Cheers! :)


Visual Studio 2015 real-time CSS editing

I was working on some updates to MenuCal this week. This morning, while completing some CSS styling on a new form, I discovered that the CSS was being updated in real time in Chrome as I made changes in Visual Studio. This is a huge improvement to my workflow, as I like to style and preview as I go. I was able to drag my CSS editor over to another screen and work away while the styles in Chrome updated instantly. No more hitting save and refreshing the browser. Thank you, Visual Studio 2015! I've seen other tools do this for quite some time, but it's nice to see it in the IDE I use every day.

But let's not get too excited. I mean, who uses plain old CSS anymore? We've been using SASS on new projects, and unfortunately this lovely little feature is not present out of the box for SASS. I will take a look around at VS plugins that might do it and report back if I find an elegant solution.

Update 2nd August, 2016: I tested out one of our projects with SCSS and Sassy Studio. While it's not as elegant as the live CSS preview, it does detect the CSS changes after they are compiled, and the browser updates the CSS.


This week in tech

I'm jotting down some notable tech news we've been discussing internally (in our Slack #techtalk channel) this week.

We use New Relic on a number of applications; it's a great tool for highlighting performance issues. Microsoft has always been somewhat in that game, but their new offering built into Azure is called "Application Insights". It looks to be a direct competitor to New Relic. It also has logging and a query engine to go with it, so it may be aiming at cloud logging providers too (like Log Entries). https://azure.microsoft.com/en-us/documentation/articles/app-insights-overview/.

Trevor uses a Mac (boo!), and we're a Microsoft development house. At times he struggles to find the right tools to work in a primarily Windows environment, and he usually resorts to a virtual machine or RDP. We recently found a tool called Wagon (https://www.wagonhq.com/), and Trevor has been using it and enjoying it. Wagon is built on Electron, another tool we have been keeping an eye on lately. Fabrizio is especially enamored with it.

Apparently we care about API versioning. I'm not sure, but other people care about it more than me: Your API versioning is wrong, which is why I decided to do it 3 different wrong ways.

VHS won! But only barely. https://www.theguardian.com/technology/2015/nov/10/betamax-dead-long-live-vhs-sony-end-prodution and http://news.sky.com/story/remember-vcrs-production-to-end-as-sales-slump-10509632.

JavaScript jokes are so hot right now:

Lastly, John found this. Have we gone too far?


Dovetail and Irish Rail launch Online Payments for Fixed Payment Notices

This week, Irish Rail launched the Online Payments facility for Fixed Payment Notices (which are penalties for fare evasion and other infringements).

The Dovetail-developed system allows passengers to pay a Fixed Payment Notice online. It is mobile-friendly and allows customers to pay a Fixed Payment Notice on their mobile, tablet, laptop or desktop computer.

The system is built using ASP.NET, C#, CSS and HTML5, and it is integrated with the Irish Rail Fixed Payment Notice Management system (a version of the Standard Fare Backoffice Management System which Dovetail previously developed for Dublin Bus).

Our work with Irish Rail, LUAS and Dublin Bus is all part of Dovetail's continued involvement with the transport sector.

The following article appeared in the February 2016 edition of Rail Brief, the Irish Rail staff magazine.  You can view the PDF here.

John and Martin with the Irish Rail Team in Connolly Station.

A FINE NEW SYSTEM

In 2015 there were 9,606 Fixed Payment Notices issued. There was a 22% increase in the number of Fixed Payment Notices issued in 2014 compared with 2013, and this trend remains upward, putting more pressure on the system in use. As Fixed Payment Notices are a source of revenue for us, it is critical that there is an intelligent information system to ensure detailed reporting and timely payment of fines.

Main Triggers for the New System

1. Two separate systems existed, one for DART and one for Intercity
The back office was using two disparate systems, Access and InfoPath, as the Intercity & Commuter (ICCN) and DART services each had their own system. This meant inputters were moving between systems with differing designs. These disparate systems continued when the RPU was centralised, meaning that the Head of Revenue Protection and the Revenue Protection & Prosecutions Manager had to interrogate each system separately and add the results together. Often they had to physically count original fines for statistics purposes, as the original system didn't allow for any meaningful interrogation. Another issue with the existing setup was that there was no single view of a person: someone could have a fine on the DART database while the ICCN database had no visibility of it.

2. Everything was manually typed
Prior to the new system, everything was manually typed; for example, there were no drop-down boxes with lists of stations, Revenue Protection Officers' names, train times or routes. This led to a likelihood of poor-quality data, as typing and spelling errors could occur given the high volumes to be input.

3. Inconsistent design between forms and databases
The fields on the screen and on the form didn't match. As a result, inputting was slowed down, as everything on the screen had to be matched to the corresponding form field. This contributed to a growing backlog, and as a consequence reminder notices were at times late going out to customers. This type of backlog can be very demotivating for an employee – no matter how hard the team worked, there seemed to be no end to it!

4. Databases were not built for high volume
There were over 38,000 records on databases that were not built for high volume, and as a result crashes often happened. Up to eight people could have been inputting at any one time, and the input may not have updated correctly. The consequence was that a letter could go out to someone who had already paid a fine.

Leading the Change

Roger Tobin, Head of Revenue Protection, has been leading the change project with support from Dave Cannon, Manager of Revenue and Prosecutions, and Shauna Fitzsimmons on the systems side. The back office team have also supported the change process. The team worked with David Bettles (Information Systems), Keith Faherty (Online Manager), Group IT and Customer First in specifying and clarifying the system requirements before Dovetail could commence their work.

Communications and Training

The team had been briefed on the full extent of the system change. These briefings were supported by the Customer First, People and Communications Lead, Linda Allen, and were delivered by Dave Cannon and Shauna Fitzsimmons. A training test system was set up by Dovetail to ensure all the team were comfortable with the system before it launched. They all found the system to be very straightforward and could really appreciate its benefits. Dovetail, the systems supplier, facilitated the training for all involved and also provided systems support to ensure the team were fully supported at 'go live' and beyond. Brian Quinn, Business Process Lead, documented the new processes arising from the implementation of the new system, to ensure there was no ambiguity in the implementation and that the process in place was the optimal one.

Phase 2 Online Payment Facility

Work is currently ongoing on setting up an online payment facility, with a Go Live expected in February 2016. Currently there are limitations on payment options, as a customer can only pay during office hours, Monday to Friday, 9am to 5pm. Being able to pay online at any time will be a huge benefit to customers; the back office team had received complaints from people who wanted to pay but couldn't get through. It will also mean a reduction in phone calls to the office, allowing employees to allocate their time to the key tasks of managing repeat offenders, analysing areas to target and managing files for maximum court prosecutions.

Phase 3 Customer First

Customer First is currently looking at electronic solutions to make the RPU more efficient. Currently Revenue Protection Officers write out Fixed Payment Notices (FPNs) by hand; portable devices would mean real-time inputting. There will be real benefits in the adoption of these portable devices.

Benefits of the New Dovetail System

One of the biggest benefits for the team is the removal of the backlog. All their hard work has significantly contributed to this. Other benefits include:

1. One single view of a ‘customer’
The new system can highlight fraudulent persons or repeat offenders, and it is able to supply lists of fraud or repeat offenders across both systems. This allows for a more intelligent type of reporting and more successful prosecutions.

2. Better targeting of fare evasion
It allows the RPU team to more intelligently target times and services where fare evasion is above average. The system allows them to interrogate information by multiple fields, e.g. by station, by time, by ticket type, by day of the week and by any other field stored. The new system has all the information in one place, which reduces the dependence on physical files.

3. One single system in place and customisation of screens
There is now one single system in place for all the back office team, capturing all Railway Undertaking fine data. All screens were customised for ease of use for the inputter: the new screens mirror the FPN form and follow the fields of the form as they appear on the page.

4. Template letters created for all scenarios
Template files for all types of letters have been supplied to the new system and letters can be generated automatically.

5. Preloaded lists and drop down boxes
The new system will have all these lists preloaded, along with the actual timetable. It will also have an address link with Google Maps, eliminating the need for freeform typing.

6. Appeals process standardised
The time spent on appeals has reduced, as the appeals process has been standardised and appeals are handled via email, with the attachment added to the system.

7. Flexible to change
The new system is more flexible to change. It allows the addition of new routes, times and officers, and allows any field to be added or amended.

 


How to squeeze eighty-seven thousand words into 74 pages

Sorry for the click bait title - I just wanted to share this interesting screenshot.

Below is a screenshot overview of a Dovetail tender document. I was interested to see how many images we use: 70 in this case, which is pretty typical of a Dovetail proposal. The pictures include such things as examples of previous work, suggested approaches for the project under discussion, UML diagrams and some of our corporate bona fides.

Bird's-eye view of the tender document

The images aid clear communication (one of our corporate values) and they also break the text up to make the document more approachable. And the proof is in the pudding - we won this particular tender :)

So if a picture really is worth a thousand words then this tender document contains 70,000 due to the images alone, and 17,523 that we wrote, giving a whopping total of 87,523!


Team City - Update Packages

Here at Dovetail we love Team City and Visual Studio.

We recently updated our Team City configuration to allow projects to be built using Visual Studio 2015 and C# 6, and to use the latest NuGet package manager.

In doing so, we discovered a very peculiar setting deep within Team City that caused one of our projects to break on build and break once deployed.

The Build Failures

After updating, we ran our build and the compiler threw an error saying that it could not find a specific version of a NuGet package. For example, our packages.config within Visual Studio specified that we use NuGet to install Newtonsoft.Json version 7, yet Team City reported that the project needed Newtonsoft.Json version 8.

We decided to update all affected NuGet packages to the latest versions, pushed our project, and Team City built it successfully.

The Deploy Failure

We then ran into our next problem. The project deployed, but there was nothing on the screen. We opened up Chrome developer tools and found that jQuery was missing. This is a project that uses a lot of JavaScript files, and it had built and deployed with no problems before.

Looking back at our Octopus Deploy package, we found that the jQuery file we were referencing and pushing to our repository was not there any more. However, we did see the latest version of the jQuery min file. Our file was being removed and replaced with the latest jQuery min version.

The Update Package Setting

We soon found the setting buried deep inside the Team City "build steps" screens:


Within the NuGet Installer build step is a setting which, when turned on, updates all your packages. This sounds great in theory, but when you run into build and deploy issues like these it will cause headaches.


 

The text underneath states: "Uses the NuGet update command to update all packages under solution. Package versions and constraints are taken from packages.config files". Whether this is a bug in Team City or not, that text seems very vague for an "Update Packages" function.

Be careful: when this checkbox is checked, Team City will not respect the packages.config version numbers and will instead download the latest version of every package.

Update: Team City have been back to us and they're going to update the explanatory text on this checkbox to make it clearer.