Vexor: how configuration files are prepared

13 June 2017

Over the past couple of months, 100 projects have joined Vexor: very different projects with unique settings. During this time the team learned a lot about creating configuration files, revised its original approach, and wants to share the changes with you.

Reworking launch script generation

VexorCI uses the configuration file vexor.yml to understand what tasks need to be done to prepare the environment and run the tests. The file lives in the root directory of the repository or is generated automatically; it specifies the commands that will be executed to prepare the test environment and run the tests.
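A minimal vexor.yml might look something like this (an illustrative sketch; the exact set of keys supported by VexorCI may differ):

```yaml
# Hypothetical example; key names are illustrative
language: ruby
before_script:
  - bundle install
script:
  - bundle exec rake test
```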

In the past, to convert vexor.yml into the commands that the worker executes when running the tests, we:

  1. Obtained the build's configuration file;
  2. Parsed it;
  3. Generated a large bash script;
  4. Sent the script to the worker.

There were a lot of problems with this scheme:

  • It was very difficult to do non-trivial tasks, for example patching config/secrets.yml in a Rails application.
  • Bash is not the best language for writing large amounts of code, and there were a lot of tricky bugs.
  • Error messages shown to the user were unclear.

The team decided to change this. Now we:

  1. Get the build's configuration in YAML format;
  2. Convert it to an intermediate representation, also a YAML file, whose structure is modeled on Ansible;
  3. Send this file to the worker for execution.

Abandoning the generation of a giant shell script allowed us to manage the build environment more flexibly, show users understandable error messages, and forget about hard-to-debug shell bugs.
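The conversion step can be sketched roughly as follows (an illustrative Python sketch, not Vexor's actual code; the section and task key names are assumptions modeled on the Ansible-style representation described above):

```python
# Illustrative sketch: convert a parsed vexor.yml (already loaded into a
# dict, e.g. by a YAML parser) into an Ansible-style ordered list of
# tasks that a worker can execute and report on step by step.

def build_tasks(config):
    """Turn config sections into an ordered list of task dicts."""
    tasks = []
    for section in ("before_script", "script"):
        for command in config.get(section, []):
            tasks.append({
                "name": f"{section}: {command}",  # human-readable label
                "shell": command,                 # one command per task
            })
    return tasks

config = {
    "before_script": ["bundle install"],
    "script": ["bundle exec rake test"],
}

for task in build_tasks(config):
    print(task["name"])
```

Because each command becomes its own task with a name, the worker can report exactly which step failed instead of dumping the output of one huge script.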

Change of keys assignment in the configuration file

To start using VexorCI, writing a configuration file is completely optional. Unfortunately, in the old version there were situations where the tasks generated by default collided with the tasks specified by the user in the configuration file.

To avoid such conflicts in the future, all database-related settings that used to live under the 'before_script' key should now go under a separate key, 'database'.

# was
before_script:
  - rake db:create
  - rake db:migrate

# now
database:
  - rake db:create
  - rake db:migrate

Using a separate key for databases makes the configuration simpler and easier to understand.

CCMenu is supported

CCMenu is a popular application that displays build statuses in the menu bar and lets you always know what is happening on the CI server. Projects from VexorCI can now easily be added to CCMenu.
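Under the hood, CCMenu polls a feed in the CCTray XML format. A status feed generally looks something like this (the values and URL are illustrative, not an actual Vexor response):

```xml
<Projects>
  <Project name="my-project"
           activity="Sleeping"
           lastBuildStatus="Success"
           lastBuildLabel="42"
           lastBuildTime="2017-06-13T10:00:00Z"
           webUrl="https://vexor.io/..."/>
</Projects>
```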

CCMenu URL in Vexor "Settings"

Go to the "Settings" of your project and copy the "CCMenu URL".

Feed URL tab in CCMenu

Paste this URL into the "Feed URL" field.

Final result

It works!

Now it's much easier to monitor the build status.

Use Vexor and get rid of restrictions and unfair pricing. Sign up and receive $5 on your account to try out Vexor.

Vexor.io

Which CI do you use?

In software engineering, continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day. There are many different continuous integration solutions, each with its strong and weak sides.

Take part in the survey on our portal: which continuous integration system do you use?

  • Vexor: 37% (11 votes)
  • CircleCI: 23% (7 votes)
  • Jenkins: 17% (5 votes)
  • TravisCI: 10% (3 votes)
  • GitLab CI: 7% (2 votes)
  • TeamCity: 3% (1 vote)
  • Atlassian Bamboo: 3% (1 vote)
  • Apache Maven: 0% (0 votes)
  • CodeShip: 0% (0 votes)
  • Semaphore: 0% (0 votes)

Total votes: 30

Vexor at HighLoad++ 2017

Alexandr Kirillov spoke about how to build a cluster to compute thousands of high-CPU / high-MEM tasks at one of the biggest Russian IT conferences.

12 December 2017

HighLoad++ is a professional conference for developers of high-load systems and the key event for everyone involved in the creation of large, heavily-frequented, and complex projects.

The main purpose of the event is to exchange knowledge and experience among leading developers of high-performance systems that support millions of users simultaneously.

The agenda covers all crucial web development aspects, such as:

  • large-scale architectures,
  • databases and storage systems,
  • system administration,
  • load testing,
  • project maintenance, etc.

This year the conference program was packed with current trends: IoT, blockchain, neural networks, and artificial intelligence, as well as architecture and front-end performance.

The 11th HighLoad++ conference took place on the 7th and 8th of November 2017.

  • 66% of participants work in large companies (30+ employees),
  • 60% earn above the market,
  • 55% hold leadership positions and have subordinates,
  • 9% of conference visitors work as technical directors,
  • 12% work as heads of technical departments, and 29% work as lead developers and team leads.

Alexandr Kirillov, CTO at Evrone, gave a talk at HighLoad++ 2017: "How to build a cluster to calculate thousands of high-CPU / high-MEM tasks and not go broke".

Alexandr Kirillov at HighLoad++ 2017

Our project is a cloud-based CI service where users run tests of the projects they develop. This year the auto-purchasing system of our project bought 37,218 machines (Amazon instances), which allowed us to process 189,488 "tasks" (test runs) for our customers.

Tests are always resource-intensive tasks with maximum consumption of processor capacity and memory, and we cannot predict how many parallel computations there will be, or when. We faced the task of building a system architecture that can very quickly grow, and just as rapidly shrink, the capacity of the cluster.

All this was complicated by the fact that these resource-intensive computations did not allow us to use the classic AWS or Google Compute Engine tools. We decided to write our own automatic scaling system that takes the requirements of our domain into account.
Alexandr Kirillov
CTO, Evrone

In his talk, Alexandr described how they designed and built the architecture of the service: a system that automatically procures machines.

He also covered the main architectural blocks of projects that solve similar problems.
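The scale-up/scale-down idea quoted above can be sketched as a simple control loop (an illustrative Python sketch under assumed names and thresholds, not the actual Evrone system):

```python
# Illustrative autoscaler sketch: buy machines when queued tasks exceed
# current capacity, and release surplus machines when the load drops.
# Function names, the tasks_per_machine parameter, and all numbers here
# are assumptions for illustration only.

def desired_machines(queued_tasks, tasks_per_machine, min_machines=0):
    """How many machines are needed to drain the current queue."""
    needed = -(-queued_tasks // tasks_per_machine)  # ceiling division
    return max(needed, min_machines)

def scale_decision(current, queued_tasks, tasks_per_machine=4):
    """Return how many machines to add (positive) or remove (negative)."""
    return desired_machines(queued_tasks, tasks_per_machine) - current

print(scale_decision(current=3, queued_tasks=20))  # need 5 machines -> 2
print(scale_decision(current=5, queued_tasks=4))   # need 1 machine -> -4
```

A real system would additionally smooth these decisions over time (to avoid thrashing on short load spikes) and account for instance startup latency and billing granularity.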