Vexor for Android

A Vexor configuration for building Android applications
25 May 2017
Vexor

Cloud Continuous Integration service.

Greetings, everyone! Today I’m going to show you how to create a configuration for Vexor CI that automates the build of Android applications.

Let me start with the configuration itself:

language: java
cache:
  directories:
  - ~/android-sdk-linux
  - ~/.gradle
before_install:
- sudo apt-get --yes -qq install lib32stdc++6 lib32z1
install:
- test -f ~/android-sdk-linux/SDK\ Readme.txt || (
    wget -O - http://dl.google.com/android/android-sdk_r24.0.2-linux.tgz | tar xz -C ~ &&
    echo y | ~/android-sdk-linux/tools/android update sdk --no-ui --all --filter platform-tools,build-tools-21.1.2,android-21,extra-android-m2repository
  )
script:
- ANDROID_HOME=~/android-sdk-linux ./gradlew --quiet assembleRelease

An Android preset is not available in Vexor CI at the moment, which is why I’ve chosen “scala” in the project settings as the closest option and specified java as the target language in the configuration.

Next comes the interesting part: the cache. Vexor CI can archive the specified paths after a job has completed and restore that archive for the next build. We use this feature to avoid downloading the Android SDK and Gradle from scratch every time, which saves build time.
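To see what the cache actually restores between jobs, a temporary debugging command can be added to “before_install”. This is only a sketch: ~/.gradle/wrapper/dists is where the Gradle wrapper normally keeps its downloaded distributions, so adjust the paths to your setup.

before_install:
# hypothetical debugging step: list what the restored cache contains
- ls ~/android-sdk-linux ~/.gradle/wrapper/dists || true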

The lib32stdc++6 and lib32z1 packages are needed to run the aapt utility from the SDK. The long line in “install” checks for a cached Android SDK; if it is missing, the SDK archive is downloaded, unpacked, and updated with the required packages. You can list all available packages with:

~/android-sdk-linux/tools/android list sdk --extended --all
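The extended output prints each package as a block starting with a line like id: 1 or "platform-tools". If you only want the short names that can be passed to --filter, a small sketch (assuming that output format) is:

# a sketch, assuming the `id: NN or "package-name"` format of the --extended output
~/android-sdk-linux/tools/android list sdk --extended --all | grep '^id:' | sed 's/.*"\(.*\)"/\1/'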

Finally, the “script” step simply runs the Gradle build.
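To make sure the build actually produced an APK, an extra command can be appended to “script”. This is a sketch only: the exact output path depends on your module name (assumed here to be “app”) and on the Android Gradle plugin version.

script:
- ANDROID_HOME=~/android-sdk-linux ./gradlew --quiet assembleRelease
# hypothetical check: list the produced APKs (the "app" module name is an assumption)
- ls -lh app/build/outputs/apk/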

Author - Pavel Perestoronin

 

Which CI do you use?

In software engineering, continuous integration (CI) is the practice of merging all developer working copies into a shared mainline several times a day. There are many different continuous integration solutions, each with its strengths and weaknesses.
Take part in our portal’s survey: which continuous integration system do you use?

Vexor at HighLoad++ 2017

Alexandr Kirillov spoke about how to build a cluster that computes thousands of high-CPU / high-MEM tasks at one of the biggest Russian IT conferences
12 December 2017

HighLoad++ is a professional conference for developers of high-load systems and the key event for everyone involved in the creation of large, heavily-frequented, and complex projects.

The main purpose of the event is the exchange of knowledge and experience among leading developers of high-performance systems that support millions of users simultaneously.

The agenda covers all crucial aspects of web development, such as:

  • large-scale architectures,
  • databases and storage systems,
  • system administration,
  • load testing,
  • project maintenance, etc.

This year the conference program featured current trends: IoT, blockchain, neural networks, and artificial intelligence, as well as architecture and front-end performance.

The 11th HighLoad++ conference took place on the 7th and 8th of November 2017. 

  • 66% of participants work in large companies (30+ employees),
  • 60% earn above the market,
  • 55% hold leadership positions and have subordinates,
  • 9% of conference visitors work as technical directors,
  • 12% work as heads of technical departments, and 29% work as lead developers and team leads.

Alexandr Kirillov, CTO at Evrone, gave a talk at HighLoad++ 2017: “How to build a cluster to calculate thousands of high-CPU / high-MEM tasks and not go broke”.

Alexandr Kirillov at HighLoad++ 2017
 

Our project is a cloud-based CI service where users run tests for the projects they develop.
This year our auto-purchasing system bought 37,218 machines (Amazon instances), which allowed us to process 189,488 “tasks” (test runs) for our customers.
 

Tests are always resource-intensive tasks with maximum consumption of CPU and memory. We cannot predict how many parallel computations there will be, or at what point in time. We faced the task of building a system architecture that can very quickly increase, as well as rapidly reduce, the capacity of the cluster.
 

All this was complicated by the fact that the resource-intensive calculations did not allow us to use the standard tools of AWS or Google Compute Engine. We decided to write our own automatic scaling system that takes the requirements of our domain into account.
 

Alexandr Kirillov
CTO, Evrone

In his talk, Alexandr described how they designed and built the architecture of the service: the system that automatically procures machines.

He also covered the main architectural building blocks of projects that solve similar problems.