Software Engineering Mini Project

For my university Software Engineering mini project, we were tasked with creating a next-generation ATM for First National Bank. For the implementation phase, we were assigned teams of five, and my team was allocated the client card information subsystem. I took charge as team leader, and am very proud of our team's results.

Planning Phase

Domain Model

Requirements Definition

The Card Authentication subsystem (CRDS) provides bank card and NFC card authentication as a service to the AUTH subsystem.

9.3.1 Functional Requirements

R 11 The CRDS will provide a service that, given a bank card ID or NFC card ID, returns the userID of the client who owns the card. If the cardID is not found, or if the user is de-activated or suspended, it responds with a NotAuthenticated exception.

R 12 The CRDS will provide a service that, given a bank card ID or NFC card ID and a PIN, authenticates the client. If the cardID is not found, or the user is de-activated or suspended, it responds with a NotAuthenticated exception. If the given PIN does not match the PIN associated with the card, it returns the userID of the client who owns the card and reports the authentication failure. Otherwise, it returns the userID of the client who owns the card and reports authentication success.

R 13 The CRDS must maintain a database containing card information, i.e. ClientID, CardID and PIN. PINs must be encrypted.

R 13.1 Provide a service that, given a clientID, de-activates all the client's cards.

R 13.2 Provide a service that, given a clientID, randomly assigns one or two cards to the client and adds a record for each card. For each card, generate a PIN and use a service provided by NS to send the PIN to the client via e-mail.

R 14 The CRDS will log all its events in a log file and push the file to the Reporting subsystem (Section 9.9) using a service that subsystem provides.


The changing of passwords was beyond the scope of this project.

Use Cases

UC1.1 Read bank card and identify the ClientID of the bearer of the card

UC1.2 Scan an NFC card and identify the ClientID of the bearer of the card

UC1.3 Obtain the PIN associated with a read/scanned bank/NFC card and verify that it matches the stored PIN for the card

UC4.1 Add a new client

UC4.4 De-activate a client

UC7.2 Log a successful or failed authentication action

UC7.6 Generate a report based on logs produced within the system

Traceability Matrix

Team Management

As team leader, I wanted to ensure that communication within the team ran smoothly. To manage the project's progress, and to make sure we knew who was working on what and when, we used Trello. Trello allowed us to create to-do, doing and done lists. Comments could be added to each item, and one or more team members could be assigned to it.

A difficult aspect of team management is ensuring that time constraints are met. I didn't want to be a droning evil overlord forcing everyone to complete their work, so we assigned multiple members to each task. This meant that while one person implemented the functionality, another would test it and check the code quality. The two members assigned to a task would push each other to complete it, and suggest better ideas or alternatives to improve it.

We also promoted an open atmosphere: if team members wanted to make suggestions or ask for help, they were welcome to. This meant that team members were more honest about progress, and approaches that were not going to work were identified early.

We primarily used WhatsApp for communication, but also used Slack for GitHub notifications and keeping track of changes. WhatsApp worked well because everybody had it, and its underrated administrative functionality makes it quite a powerful tool for group communication.

We met regularly on campus throughout the project, holding group sessions to discuss how to implement certain requirements and to make sure the system as a whole made sense. We also held programming sessions, in which we quickly finished features that required a lot of team communication or were difficult to implement.

Implementation Phase

System Architectural Design

The overall system was designed as a microservices architecture: each subsystem was its own self-contained server, exposing API endpoints for other subsystems to consume.

We designed our subsystem with a persistence layer for storing and managing client card information, and a layered architecture realised using MVC for the rest of the system (a JSON REST API serves as the view, controllers handle business logic, and the model is the persistence layer).
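As a sketch of that layering, the flow from view to controller to model can be shown without any framework at all. The names and data here are illustrative, not the real subsystem's code:

```typescript
// Model: the persistence layer (an in-memory map stands in for the database).
const cards = new Map<string, { clientId: string; active: boolean }>([
  ["card-001", { clientId: "client-42", active: true }],
]);

// Controller: business logic, knows nothing about HTTP.
function lookupCardOwner(cardId: string): { userId: string } {
  const card = cards.get(cardId);
  if (!card || !card.active) throw new Error("NotAuthenticated");
  return { userId: card.clientId };
}

// View: the JSON REST layer; in the real subsystem this would be an
// Express route handler serialising the controller's result.
function handleGetOwner(cardId: string): string {
  try {
    return JSON.stringify({ status: 200, body: lookupCardOwner(cardId) });
  } catch {
    return JSON.stringify({ status: 401, body: { error: "NotAuthenticated" } });
  }
}
```

Keeping the controller free of HTTP concerns is what lets each layer be tested and swapped independently.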

Design Philosophy

Right from the get-go, we decided we wanted to make a functioning system, one that could be considered close to production-ready. We built the system modularly so that each member could work semi-independently without major scheduling conflicts.

We took our cue from the principles of Agile development: we tested early and often, creating unit and integration tests as we developed the system, and we kept documentation to a minimum, relying mostly on self-documenting code, with inline documentation where code needed explaining.

To make the system as modular as possible, we created helper classes early that we could build functionality on later, reducing code repetition and inconsistency while maximising cohesion and minimising coupling.

Technology Choices

We decided on Node.js with Express.js to build our subsystem. Node.js is easy to learn, and most members of the group had at least some experience with JavaScript. Express.js was an obvious choice as a server framework, as it is easy to get started with, and is extensible and modular.

To improve coherence and reduce errors, we decided to use TypeScript, a typed language that transpiles to JavaScript, rather than plain JavaScript. Specifying types made documentation easier, and ensured implementation went smoothly without questions about which parameters each function should accept.

We hosted our server on Heroku, with automated deployment from our master branch, and also set up a Heroku staging server that automatically deployed from our development branch. This allowed us to test functionality against a working server before pushing to production and risking overwriting database information.

We used TravisCI for continuous integration. It allowed us to automatically run unit tests and integration tests on pushing code. This ensured code quality and provided us with an indication that the system was working in the ways we intended.

Our unit tests and integration tests were implemented using Mocha and Chai.

Building a modular, consistent system

Environment variables

We used environment variables defined in a .env file in the root of the project. These let us specify whether we were in development or production mode, and limit use of other systems' resources (for instance, we disabled sending e-mails during unit tests, mocking them instead).

In development mode, the system would use a local SQLite database; in production, it would use a database provided by Heroku. This made getting started with development easier.
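A minimal sketch of how that switch could work, assuming Heroku's usual DATABASE_URL convention (the variable names and the SQLite file path are assumptions, since the real configuration isn't shown here):

```typescript
interface DbConfig {
  type: "sqlite" | "postgres";
  database?: string; // sqlite file path
  url?: string;      // Heroku-provided connection string
}

// Choose the database configuration from the environment.
function dbConfigFor(env: Record<string, string | undefined>): DbConfig {
  if (env.NODE_ENV === "production") {
    // Heroku exposes its provisioned database via DATABASE_URL.
    return { type: "postgres", url: env.DATABASE_URL };
  }
  // Local development falls back to an on-disk SQLite file.
  return { type: "sqlite", database: "dev.sqlite" };
}
```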

Database handling

We used TypeORM as our object-relational mapper. It let us define our models as TypeScript classes very easily, and made database management a breeze.

It also allowed us to switch between database providers without modifying any additional code, meaning we could try several databases to find the one that worked best for our subsystem, and could change it in future if need be.

Custom abstract Route class

To make routes significantly easier to manage, I created a custom abstract class called Route. This class would be extended by concrete routes when implementing functionality.

This meant all routes were defined in a standardised way and could be manipulated from a single abstract source, rather than existing as arbitrary functions.

Routes required certain properties to be set before use. The first was an example response: a JSON representation of a response the consumer might get from the endpoint. This was used for documentation, and for scaffolding routes so that we knew the expected result of our functionality.

Routes could also have a list of RouteParams. A RouteParam holds the parameter's name, a valid example value, and an asynchronous validator function (returning a promise). The route uses this to automatically check that the request body contains the correct parameters and that they are valid. The valid example value is used in unit and integration testing, and for presenting consumers with a working example.

A description of what the route does, and a list describing any side effects not reflected in the response (e.g. sending an e-mail), were also added to the class.

The Route class also caught any exceptions thrown and piped them through Express's next() function to our error-handler middleware.

An example route:

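A minimal sketch of such a route, reconstructing the Route and RouteParam API from the description above; all names and signatures here are assumptions, not the real subsystem's code:

```typescript
interface RouteParam {
  name: string;
  example: unknown;                               // valid example, used in tests and docs
  validate: (value: unknown) => Promise<boolean>; // asynchronous validator
}

abstract class Route {
  abstract readonly description: string;
  abstract readonly sideEffects: string[];    // effects not visible in the response
  abstract readonly exampleResponse: object;  // used for docs and scaffolding
  abstract readonly params: RouteParam[];

  // Check the request body against the declared params before handling it.
  async checkParams(body: Record<string, unknown>): Promise<string[]> {
    const errors: string[] = [];
    for (const p of this.params) {
      if (!(p.name in body)) errors.push(`missing parameter: ${p.name}`);
      else if (!(await p.validate(body[p.name]))) errors.push(`invalid parameter: ${p.name}`);
    }
    return errors;
  }

  abstract handle(body: Record<string, unknown>): Promise<object>;
}

// A concrete route in the spirit of R 11: card ID in, owner's userID out.
class CardOwnerRoute extends Route {
  readonly description = "Return the userID of the client who owns the card";
  readonly sideEffects: string[] = [];
  readonly exampleResponse = { userId: "client-42" };
  readonly params: RouteParam[] = [
    { name: "cardId", example: "card-001", validate: async v => typeof v === "string" && v.length > 0 },
  ];

  async handle(body: Record<string, unknown>): Promise<object> {
    // The real implementation would query the persistence layer here.
    return { userId: `owner-of-${body.cardId}` };
  }
}
```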

Custom route test suite class

To further improve productivity, I wrote a test-suite builder for routes. The RouteTestSuite class uses the builder design pattern to make integration testing of a route easy, with function chaining allowing the suite to be built the way one would build a jQuery query.

Before each test suite runs, it starts a development server and clears the database. It then runs each of the tests the developer specified. After the tests, the server is closed and the database cleared again.

By having our developers implement tests this way, we had a single controllable interface through which we could change how all integration tests ran without touching much code. One example of where this was needed: TravisCI would run the tests, but on error would still report success, because the server kept running after an exception and the Travis run simply timed out with an OK. To counter this, we made the process exit with an error code when an exception was encountered.

The RouteTestSuite class also automates the testing of invalid and missing parameters via .testInvalidParameters() and .testMissingParameters() respectively. This saved a lot of copy-pasted code.

The .add() function gives the developer an easy interface for creating a test without importing libraries that may change in future (e.g. if we swapped Chai's expect for a different assertion library). It also provides access to the database singleton without the developer having to import the database and call getInstance() themselves.

Because its functions are asynchronous, we could use async/await, which drastically simplified the test code.

A preamble function allowed the developer to alter the database, or perform any other necessary setup, in a standard manner before each test.

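A sketch of how such a builder might look. Server startup/teardown is elided and the database singleton is stubbed with a plain map; all names here are reconstructions from the description, not the real API:

```typescript
type TestFn = (db: Map<string, unknown>) => Promise<void>;

class RouteTestSuite {
  private tests: { name: string; fn: TestFn }[] = [];
  private preambleFn: TestFn = async () => {};
  private db = new Map<string, unknown>(); // stand-in for the database singleton

  static create(): RouteTestSuite { return new RouteTestSuite(); }

  // Standard place to seed the database before each test.
  preamble(fn: TestFn): this { this.preambleFn = fn; return this; }

  // Chainable: add a named test that receives the database singleton.
  add(name: string, fn: TestFn): this { this.tests.push({ name, fn }); return this; }

  async run(): Promise<string[]> {
    const passed: string[] = [];
    this.db.clear();                 // clear DB before the suite (server start elided)
    for (const t of this.tests) {
      await this.preambleFn(this.db);
      await t.fn(this.db);           // a throw here fails the suite
      passed.push(t.name);
    }
    this.db.clear();                 // clear DB after the suite (server stop elided)
    return passed;
  }
}
```

The chaining means a whole suite reads as one expression: `RouteTestSuite.create().preamble(...).add(...).add(...).run()`.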

Testing setup using mocha

When we first started unit and integration testing, we kept everything in a single file. This quickly proved problematic, as it grew past 1000 lines within a few days.

To combat this, we split the file into .spec.ts files, each containing the tests for the class of the same name (e.g. the tests for database.ts live in database.spec.ts in the same directory). This made unit testing easier, as the files were easier to manage.

Mocha would then scan the source directory for all .spec.ts files, and perform all the unit and integration tests.
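The actual Mocha configuration isn't shown here; a typical setup achieving this might look like the following .mocharc.json, where the ts-node registration (so Mocha can run TypeScript directly) is an assumption:

```json
{
  "require": "ts-node/register",
  "spec": "src/**/*.spec.ts"
}
```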

Automated API documentation generation

Because we were working with other teams who needed to consume our service, we wanted to have up-to-date API documentation at all times. To accomplish this, we developed a custom class that would visit all of the routes and produce a documentation page accessible at the homepage of the server.

Some cool implementation details

Secure hashing

To protect our clients' generated PIN codes, a class was created that hashes the PIN with SHA-512, derives a salt by taking a SHA-256 hash of the first 40 characters of the resulting SHA-512 digest, and then performs a SHA-256 hash on that salted value.

While this was a little overboard, it would be near impossible to break.
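The scheme as described can be sketched with Node's built-in crypto module. The exact way the salt is combined with the hash isn't specified above, so the concatenation order here is an assumption:

```typescript
import { createHash } from "crypto";

// Hash a PIN per the scheme described: SHA-512, then a salt derived from
// the first 40 hex characters of that digest, then SHA-256 over the
// salt + digest (concatenation order assumed).
function hashPin(pin: string): string {
  const sha512 = createHash("sha512").update(pin).digest("hex");
  const salt = createHash("sha256").update(sha512.slice(0, 40)).digest("hex");
  return createHash("sha256").update(salt + sha512).digest("hex");
}
```

Worth noting: for low-entropy secrets such as short PINs, a deliberately slow algorithm like bcrypt or scrypt resists brute force better than any combination of fast SHA hashes.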

Generating card numbers

Card numbers were generated to look like actual FNB bank card numbers: googling one of the generated numbers would show a valid South African Mastercard debit or credit card from FNB.

This showed that the system could work with real-world data, and was mostly done to impress the FNB clients.
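Generating a number that passes as a real card comes down to a bank identification number (BIN) prefix plus a Luhn check digit. A sketch, using a placeholder Mastercard-range prefix rather than FNB's real BIN (which isn't given here):

```typescript
const BIN = "510000"; // hypothetical Mastercard-range prefix, not FNB's real BIN

// Compute the Luhn check digit for a 15-digit partial card number.
function luhnCheckDigit(partial: string): number {
  let sum = 0;
  const digits = partial.split("").reverse().map(Number);
  for (let i = 0; i < digits.length; i++) {
    let d = digits[i];
    if (i % 2 === 0) {  // double every second digit, starting immediately
      d *= 2;           // left of where the check digit will go
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return (10 - (sum % 10)) % 10;
}

// Build a 16-digit number: BIN + random account digits + check digit.
function generateCardNumber(): string {
  let partial = BIN;
  while (partial.length < 15) partial += Math.floor(Math.random() * 10);
  return partial + luhnCheckDigit(partial);
}

// Standard Luhn validation over a full card number.
function isLuhnValid(num: string): boolean {
  let sum = 0;
  const digits = num.split("").reverse().map(Number);
  for (let i = 0; i < digits.length; i++) {
    let d = digits[i];
    if (i % 2 === 1) { d *= 2; if (d > 9) d -= 9; }
    sum += d;
  }
  return sum % 10 === 0;
}
```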

Redacting logging

Our logs regularly sent all requests, with their IP address, request body and headers, and response body and status code, to the logging/reports subsystem. One issue was that if an incoming request contained a PIN code, we didn't want it to be visible.

As such, we redacted all pin codes from the logs before storing them and sending them to the other subsystem.
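A sketch of what that redaction could look like: recursively replace any field whose name looks like a PIN before the entry is stored or shipped. The field-name heuristic here is an assumption:

```typescript
// Walk a log entry and blank out any property whose key contains "pin".
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value)) {
      out[k] = /pin/i.test(k) ? "[REDACTED]" : redact(v);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}
```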

Conclusion

Our subsystem ended up a roaring success. On demo day it worked perfectly with the other subsystems: on request through another subsystem, the lecturer received a generated card number and PIN code via e-mail, and authentication with the generated card later worked, too.

I learnt a lot about team management, and feel that my skills have improved in this area. I feel I still have much to improve on, but am happy with my progress.

My team and I thoroughly enjoyed the project, and we worked and communicated consistently, resulting in an end product we are proud of.

Group 14 CRDS