Team2 and CSC – IT Center for Science enter into cooperation

CSC – IT Center for Science has signed an agreement on IT service procurement with Team2 Oy.

CSC and Team2 Oy have concluded a framework agreement on information system technology and integration services. The framework arrangement, which started in late 2015, runs for the contract period 2015–2017 and includes a two-year extension option through 2019.

The framework arrangement covers several educational institutions and universities, including Haaga-Helia University of Applied Sciences, Turku University of Applied Sciences, the University of Lapland, Lappeenranta University of Technology, Karelia University of Applied Sciences, the University of Jyväskylä, and Metropolia University of Applied Sciences.

Team2 Oy joins the development of the new HSL journey planner

The new journey planner is being developed in cooperation between HSL and the Finnish Transport Agency (Liikennevirasto). Both HSL's journey planner and the Finnish Transport Agency's Matka.fi service will adopt the new routing and timetable service platform.

The new journey planner is a multi-vendor development effort, implemented as an open source project. The source code can be found on GitHub.

Visma Consulting is one of HSL's framework agreement suppliers, and Team2 Oy works on the project as one of Visma's subcontractors. Team2 Oy joined the journey planner development in September.

The project also makes use of numerous existing open source projects, such as Pelias (geocoding) and OpenTripPlanner (routing), as well as open data sources such as OpenStreetMap, the National Land Survey (MML) place names, and the open APIs of the Finnish Transport Agency and HSL.

“Nowadays, more and more projects are implemented under an open source license. In this project, open source code and open data come together more seamlessly than ever before,” comments Jani Wilén, CEO of Team2 Oy.

Read more about the project: HSL and the Finnish Transport Agency are developing a journey planner for the whole of Finland

MORE INFORMATION

Jani Wilén, CEO
E-mail: firstname.lastname(at)team2.fi

ABOUT HELSINKI REGION TRANSPORT (HSL)

Helsinki Region Transport (HSL) is a joint municipal authority whose members are Helsinki, Espoo, Vantaa, Kauniainen, Kerava, Kirkkonummi, and Sipoo. HSL started operating in 2010.

Around 353 million journeys are made each year on transport organized by HSL. The joint municipal authority has 400 employees.

Easy Backup and Restore for Your Elasticsearch Entities

Elasticsearch is a great search/discovery engine from elastic.co. It is a simple-to-set-up (what isn't nowadays, thanks to Docker?), feature-rich, Apache Lucene based search engine that is in many ways similar to Apache Solr.

Traditionally, Solr & ES are mostly used to provide more visibility into your data through adding search and browse capabilities. Often this is done by indexing already existing application data into the search engine. In this kind of setup it might not be a big deal whether the index data is backed up or not, because it is possible to rebuild the index from the master data when a bug in your application deletes the index data.

Recently, though, my gut tells me the trend is moving towards a leaner approach where people are starting to use ES not only as a search engine but also as the primary data storage layer for business applications. I am not surprised: both Solr and ES are fast compared to many other data stores around, and they are feature-rich, ridiculously scalable data slicing and dicing platforms that have been proven production ready for ages now.

In this blog post I am going to present a simple, yet in many cases usable, way to back up your index data to a remote server.

Elasticsearch offers a nice snapshot/restore API that can be used to take backups of the index data. Out of the box it can make remote backups to a shared file system repository, and additional plugins add support for storing backups to AWS, HDFS, and Azure. We are going to use the shared file system repository, since it allows us to use any SSH-accessible remote box as a backup server. I am also going to use Docker here, purely for convenience.
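One caveat: depending on your Elasticsearch version, the repository location may also need to be whitelisted in elasticsearch.yml before the repository registration further below succeeds (newer releases refuse paths not listed in path.repo). Assuming the /backup mount point used later in this post, that would be:

path.repo: ["/backup"]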

Pull in ES container

docker pull elasticsearch:latest

Start it (the SYS_ADMIN capability and the /dev/fuse device are needed later, so that we can mount the sshfs backup share inside the container):

docker run --cap-add SYS_ADMIN --device /dev/fuse -d -p 9200:9200 -p 9300:9300 elasticsearch

You should now see the Elasticsearch server running as a container:

[sam@localhost srvr]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16f87173f7e9 elasticsearch:1 "/docker-entrypoint. 4 seconds ago Up 3 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp romantic_bell

The data storage is now ready to be used. Let's index some content with httpie:

[sam@localhost srvr]$ http PUT http://127.0.0.1:9200/myapp/users/sam id=sam firstName=Sami lastName=Siren password=secret
[sam@localhost srvr]$ http PUT http://127.0.0.1:9200/myapp/users/foo id=foo firstName=Foo lastName=Bar password=secret

After indexing, the data is searchable:

[sam@localhost srvr]$ http http://127.0.0.1:9200/myapp/users/_search?q=bar
HTTP/1.1 200 OK
Content-Length: 277
Content-Type: application/json; charset=UTF-8

{
    "_shards": {
        "failed": 0,
        "successful": 5,
        "total": 5
    },
    "hits": {
        "hits": [
            {
                "_id": "foo",
                "_index": "myapp",
                "_score": 0.2169777,
                "_source": {
                    "firstName": "Foo",
                    "id": "foo",
                    "lastName": "Bar",
                    "password": "secret"
                },
                "_type": "users"
            }
        ],
        "max_score": 0.2169777,
        "total": 1
    },
    "timed_out": false,
    "took": 4
}

And the entities can also be retrieved by id:

[sam@localhost srvr]$ http http://127.0.0.1:9200/myapp/users/sam
HTTP/1.1 200 OK
Content-Length: 161
Content-Type: application/json; charset=UTF-8

{
    "_id": "sam",
    "_index": "myapp",
    "_source": {
        "firstName": "Sami",
        "id": "sam",
        "lastName": "Siren",
        "password": "secret"
    },
    "_type": "users",
    "_version": 13,
    "found": true
}

Excellent. Now let's set up the shared filesystem that we use for backups. First we attach to the running container:

[sam@localhost srvr]$ docker exec -it 16f87173f7e9 bash

Next, install the SSH client, sshfs, and vim in the container:

root@16f87173f7e9:/# apt-get update
root@16f87173f7e9:/# apt-get install openssh-client sshfs vim

Generate an RSA key pair:

root@16f87173f7e9:/# ssh-keygen

Deploy the generated public key to the remote server:

root@16f87173f7e9:/# ssh-copy-id es@x.x.x.x

Edit /etc/fstab to contain:

es@x.x.x.x:/home/es /backup fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0

Create a mount point for the backup dir inside the container and mount it:

root@16f87173f7e9:/# mkdir /backup
root@16f87173f7e9:/# mount /backup
root@16f87173f7e9:/# chmod a+w /backup/

Now the Docker container is fully set up and we can exit the container shell. Next we register a snapshot repository in ES so that it knows where to store backups:

[sam@localhost srvr]$ http PUT http://127.0.0.1:9200/_snapshot/backups type=fs settings:='{"location":"/backup/userbackup","compress":true}'
HTTP/1.1 200 OK
Content-Length: 21
Content-Type: application/json; charset=UTF-8

{
 "acknowledged": true
}
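
If you want to sanity-check the registration, the repository definition can be read back with a plain GET against the same endpoint:

[sam@localhost srvr]$ http http://127.0.0.1:9200/_snapshot/backups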

And create a backup from the current state:

[sam@localhost srvr]$ http PUT http://127.0.0.1:9200/_snapshot/backups/snapshot_1?wait_for_completion=true
HTTP/1.1 200 OK
Content-Length: 314
Content-Type: application/json; charset=UTF-8

{
    "snapshot": {
        "duration_in_millis": 16147,
        "end_time": "2015-05-22T20:26:19.838Z",
        "end_time_in_millis": 1432326379838,
        "failures": [],
        "indices": [
            "myapp"
        ],
        "shards": {
            "failed": 0,
            "successful": 5,
            "total": 5
        },
        "snapshot": "snapshot_1",
        "start_time": "2015-05-22T20:26:03.691Z",
        "start_time_in_millis": 1432326363691,
        "state": "SUCCESS"
    }
}

List all available snapshots:

[sam@localhost srvr]$ http http://127.0.0.1:9200/_snapshot/backups/_all

Now that the index data is safely transferred to an external box, we can delete an entity from our data storage:

[sam@localhost srvr]$ http DELETE http://localhost:9200/myapp/users/foo

…and verify it’s gone

[sam@localhost srvr]$ http get http://localhost:9200/myapp/users/foo
HTTP/1.1 404 Not Found
Content-Length: 60
Content-Type: application/json; charset=UTF-8

{
    "_id": "foo",
    "_index": "myapp",
    "_type": "users",
    "found": false
}

The restore procedure starts with closing the index:

[sam@localhost srvr]$ http POST localhost:9200/myapp/_close
HTTP/1.1 200 OK
Content-Length: 21
Content-Type: application/json; charset=UTF-8

{
 "acknowledged": true
}

And then we can restore the data from the backup we took earlier:

[sam@localhost srvr]$ http POST http://127.0.0.1:9200/_snapshot/backups/snapshot_1/_restore
HTTP/1.1 200 OK
Content-Length: 17
Content-Type: application/json; charset=UTF-8

{
 "accepted": true
}

And after opening the index again:

[sam@localhost srvr]$ http POST localhost:9200/myapp/_open
HTTP/1.1 200 OK
Content-Length: 21
Content-Type: application/json; charset=UTF-8

{
    "acknowledged": true
}

We can verify that the data is indeed there:

[sam@localhost srvr]$ http get http://localhost:9200/myapp/users/foo
HTTP/1.1 200 OK
Content-Length: 157
Content-Type: application/json; charset=UTF-8

{
    "_id": "foo",
    "_index": "myapp",
    "_source": {
        "firstName": "Foo",
        "id": "foo",
        "lastName": "Bar",
        "password": "secret"
    },
    "_type": "users",
    "_version": 1,
    "found": true
}

As we can see, the ES snapshot/restore functionality is pretty sleek and easy to use. I recommend reading through the documentation, as it contains a lot more details and options than I have covered here.
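
One housekeeping note: snapshots are incremental, so it is cheap to take them often, and obsolete ones can be removed through the same API. For example, deleting the snapshot we created above would look like this:

[sam@localhost srvr]$ http DELETE http://127.0.0.1:9200/_snapshot/backups/snapshot_1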

Fedora Server 21 Surprise

Recently I had a server death, and I got a chance to update the OS. Coming from a Red Hat background, the choice was obvious: Fedora 21. After doing some post-install checks, like verifying the listening TCP ports, I noticed a new piece of software that was installed by default: a browser-operated admin UI called Cockpit.

Cockpit has basic ad hoc monitoring capabilities for checking CPU, disk I/O, memory, and network traffic, with basic graphing. It also lets you check the system journal, modify disk and network configuration (bridges, VLANs, etc.), and configure system services.

For me the coolest feature of Cockpit was the possibility to configure and launch docker containers right from the browser.

[Screenshot: the Cockpit web UI]

Sweet!

Team2 Oy involved in developing energy-saving software

Team2 Oy has signed an assignment agreement with Eniram Oy on the development of software solutions that save fuel and reduce emissions in marine traffic. The assignment begins at the end of September.

“The Internet of Things (IoT) enables the development of entirely new kinds of systems supporting real-time decision making. The underlying analytics systems are often built on one or more open source systems or frameworks,” comments Jani Wilén, CEO of Team2 Oy.

MORE INFORMATION

Jani Wilén, CEO
E-mail: firstname.lastname(at)team2.fi

Eniram develops energy-saving technology that reduces fuel consumption and emissions in marine traffic. Its software solutions, developed by experienced mariners and technology experts, range from applications installed on cruise and cargo ships to analysis systems covering entire fleets. Eniram helps shipping companies save fuel, reduce harmful emissions, and improve profitability. Founded in 2005, Eniram Oy has its headquarters and product development in Helsinki. More information: http://www.eniram.fi.

Error logging (server side) for AngularJS

When you have a production application written in AngularJS, it sometimes happens that the JavaScript code has bugs (gosh!).

Usually you don't know about these (if your unit tests don't catch them) until your users start to complain about things not working.

So – what to do?

  • Ask your users to “open the JavaScript console and copy-paste the log into an email”?
  • Ask specific questions (“Where does the error happen? What did you do? What data were you editing?”) and test it yourself
  • Log errors automatically to the server side and harvest user interface exceptions

Automatic error logging from AngularJS to server side

So how to catch unhandled javascript errors in AngularJS?

Add an exception handler to your application:

//
// Enhance the application by adding custom exception handler
// - Catch unhandled errors
//
angular.module('app').provider(
        "$exceptionHandler",
        {
            $get: function(errorLogService) {
                console.log("$exceptionHandler.$get()");
                return(errorLogService);
            }
        }
);

In the previous code you are creating a provider for $exceptionHandler, and from it you return your own exception handler, errorLogService.
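
For what it's worth, a common alternative is to decorate the stock $exceptionHandler instead of replacing its provider. A minimal sketch (assuming the same errorLogService factory defined below; with this variant you would drop its own $log.error call, since the delegate already logs to the console):

//
// Alternative: decorate the built-in $exceptionHandler.
// $delegate is the original handler provided by Angular.
//
angular.module('app').config(function($provide) {
    $provide.decorator("$exceptionHandler", function($delegate, errorLogService) {
        return function(exception, cause) {
            // Keep the default behavior (console logging)...
            $delegate(exception, cause);
            // ...and forward the error to our logging service.
            errorLogService(exception, cause);
        };
    });
});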

What the hell is the “errorLogService”?

It's the factory that provides your error logger (example 2 in GitHub):

//
// Factory to provide the error log service
// - simple console logger
//
angular.module('app').factory(
        "errorLogService",
        function($log, $window) {

            $log.info("errorLogService()");

            function log(exception, cause) {
                $log.debug("errorLogService.log()");

                // Default behavior, log to browser console
                $log.error.apply($log, arguments);

                logErrorToServerSide(exception, cause);
            }

            function logErrorToServerSide(exception, cause) {
                $log.info("logErrorToServerSide()... NOT IMPLEMENTED");
            }

            // Return the logging function.
            return(log);            
        });
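
To verify that the handler is actually wired in, you can throw an error from inside Angular's digest, for example via $timeout in a throwaway controller (TestCtrl here is hypothetical, purely for illustration):

//
// Hypothetical controller used only to exercise the custom handler:
// errors thrown inside a $timeout callback are routed to
// $exceptionHandler, i.e. to errorLogService.
//
angular.module('app').controller('TestCtrl', function($timeout) {
    $timeout(function() {
        throw new Error("Test error for the custom $exceptionHandler");
    });
});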

An example of how to actually log errors to the server side

The following is an example of how to log errors to the server side.

I am using jQuery to POST the errors as JSON to a server-side handler (see example 3 in GitHub).

//
// Factory to provide the error log service
// - log errors to server side
//
angular.module('app').factory(
        "errorLogService",
        function($log, $window) {

            $log.info("errorLogService()");

            function log(exception, cause) {
                $log.debug("errorLogService.log()");

                // Default behavior, log to browser console
                $log.error.apply($log, arguments);

                logErrorToServerSide(exception, cause);
            }

            function logErrorToServerSide(exception, cause) {
                $log.info("logErrorToServerSide()");

                // Read from configuration
                var serviceUrl = "http://localhost:3000/error";

                // Try to send stacktrace event to server
                try {
                    $log.debug("logging error to server side: serviceUrl = " + serviceUrl);

                    // Not sure how portable this actually is
                    var errorMessage = exception ? exception.toString() : "no exception";
                    var stackTrace = exception ? (exception.stack ? exception.stack.toString() : "no stack") : "no exception";
                    var browserInfo = {
                        navigatorAppName : navigator.appName,
                        navigatorUserAgent : navigator.userAgent
                    };

                    // This is the custom error content you send to server side
                    var data = angular.toJson({
                        errorUrl: $window.location.href,
                        errorMessage: errorMessage,
                        stackTrace: stackTrace,
                        cause: (cause || "no cause"),
                        browserInfo: browserInfo
                    });

                    $log.debug("logging error to server side...", data);

                    // Log the JavaScript error to the server.
                    $.ajax({
                        type: "POST",
                        url: serviceUrl,
                        contentType: "application/json",
                        xhrFields: {
                            withCredentials: true
                        },
                        data: data
                    });

                } catch (loggingError) {
                    // For Developers - log the logging-failure.
                    $log.warn("Error logging to server side failed");
                    $log.log(loggingError);
                }
            }

            // And return the logging function
            return(log);            
        });

If you really want to see the server-side output, you can start the included simple Node.js/Express web server, which logs the errors to the console.

    cd error_logging_server
    npm install
    node index.js
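
For reference, the core of such a server might look roughly like the sketch below. This is not necessarily the exact code in the repository; it assumes the express and body-parser packages and the /error route used in the client code above. Because the client sends a credentialed cross-origin request, the CORS headers have to echo the origin instead of using a wildcard:

//
// Minimal Express server that accepts the JSON error reports
// POSTed by errorLogService and dumps them to the console.
//
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());

// CORS: echo the origin (a wildcard is not allowed with credentials)
// and answer preflight OPTIONS requests.
app.use(function(req, res, next) {
    res.header('Access-Control-Allow-Origin', req.headers.origin || '*');
    res.header('Access-Control-Allow-Credentials', 'true');
    res.header('Access-Control-Allow-Headers', 'Content-Type');
    if (req.method === 'OPTIONS') {
        return res.sendStatus(204);
    }
    next();
});

app.post('/error', function(req, res) {
    console.log('Client error report:', JSON.stringify(req.body, null, 2));
    res.sendStatus(204);
});

app.listen(3000, function() {
    console.log('Error logging server listening on port 3000');
});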

Example in GitHub

https://github.com/Team2Oy/blog-angular-exception-logging

Other things to consider

If you get too many of the same errors from the UI:

  • Inspect and identify identical errors/locations in the client JavaScript and log each one only once
  • Do the error aggregation on the server side
  • Write unit tests 🙂

Standard disclaimer

Use freely at your own risk.