Neural Network to Forge Fingerprints

DeepMasterPrints can be used as a key for hacking biometric identification systems
16 November 2018

Researchers at New York University have developed a generative adversarial network (GAN) that synthesizes fingerprint images. These synthetic prints, called DeepMasterPrints, can be used as a key for hacking biometric identification systems.

DeepMasterPrints exploit two properties of fingerprints and of biometric systems.

First, many scanners do not read the entire print. They capture only a part of it and compare that part with the corresponding partial views stored in the database. A fake print therefore only needs to match a portion of the original.
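The acceptance logic described above can be sketched in a few lines. This is a toy model, not the matcher from the study: `similarity`, the feature strings, and the threshold are all hypothetical stand-ins for real minutiae matching.

```python
# Toy model of partial-print matching: a spoof only needs to match
# ONE of the partial views a sensor stores per enrolled finger.

def similarity(a, b):
    """Hypothetical similarity: fraction of positions where two feature strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def accepts(spoof, enrolled_patches, threshold=0.8):
    """A partial-print scanner accepts if the spoof matches ANY stored patch."""
    return any(similarity(spoof, patch) >= threshold for patch in enrolled_patches)

# Each enrolled finger is stored as several partial views (toy feature strings).
patches = ["ridge-loop-whorl", "loop-arch-ridge", "whorl-ridge-arch"]
print(accepts("ridge-loop-whorl", patches))  # matching a single patch is enough
```

The key point is the `any(...)`: the attacker never has to reproduce the whole finger, only one stored fragment.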

Second, the researchers noted that some fingerprint features recur frequently. An artificial print that combines a set of common features will therefore match several genuine prints at once.
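The same idea can be illustrated with a toy "master template" built from the most frequent features across a database. The feature names and the overlap rule below are invented for illustration; real systems compare minutiae geometry, not labels.

```python
# Sketch: a master template built from the most common features
# matches many enrolled prints at once (toy sets, not real minutiae).

from collections import Counter

enrolled = [
    {"loop", "whorl", "delta"},
    {"loop", "whorl", "core"},
    {"loop", "delta", "arch"},
    {"scar", "island", "pond"},
]

# Pick the three features that appear most often across the database.
counts = Counter(f for prints in enrolled for f in prints)
master = {f for f, _ in counts.most_common(3)}

# Count how many enrolled prints share at least 2 features with the master.
matches = sum(len(master & p) >= 2 for p in enrolled)
print(master, matches)  # the common-feature template covers 3 of 4 prints
```

One template built from common features "unlocks" most of this toy database, which is exactly the property DeepMasterPrints optimize for.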

DeepMasterPrints

The study showed that at a system tolerance (false match rate) of 0.1%, the artificial fingerprints could impersonate up to 23% of the fingerprints in the database. At a 1% error rate, the neural network could fake up to 77% of the prints.

Experts compare the use of DeepMasterPrints to a brute-force attack, in which an attacker tries every password that might fit.
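The analogy can be made concrete with a toy simulation: present a handful of "master" candidates to every enrolled template and see what fraction of the database is broken. The matcher below is purely hypothetical; it accepts an impostor with a fixed probability, standing in for a scanner's false match rate.

```python
# Toy model of the brute-force analogy: a few master-print candidates
# are tried against every enrolled template.

import random

random.seed(0)

def toy_matcher(candidate, template, fmr):
    """Hypothetical scanner: accepts an impostor with probability `fmr`."""
    return random.random() < fmr

def attack(candidates, templates, fmr):
    """Try each candidate against each template; return the fraction broken."""
    broken = {t for t in templates
              for c in candidates if toy_matcher(c, t, fmr)}
    return len(broken) / len(templates)

templates = range(1000)  # 1,000 enrolled fingerprints
coverage = attack(range(5), templates, fmr=0.01)
print(coverage)  # in expectation about 1 - 0.99**5, i.e. roughly 5%
```

Like a password dictionary, a small list of candidates compounds: each extra master print raises the fraction of the database an attacker can unlock.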

OpenAI Creates a Fake-News-Writing Algorithm

Given one or two phrases that set the topic, it can “write” a fairly plausible story
18 February 2019

The GPT-2 algorithm, created by OpenAI for working with language and text, turned out to be a master at creating fake news. Given one or two phrases that set the topic, it can “compose” a fairly plausible story. For example:

  • an article about scientists who have found a herd of unicorns in the Andes;
  • news about pop star Miley Cyrus being caught shoplifting;
  • a fictional passage about Legolas and Gimli attacking the orcs;
  • an essay on how waste recycling harms the economy, nature, and human health.
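The generation loop behind such stories can be illustrated with a toy autoregressive sampler: given a prompt, repeatedly pick the next word from a learned distribution. Here a tiny bigram table stands in for GPT-2's actual neural network; the corpus and function names are invented for the sketch.

```python
# Toy illustration of autoregressive text generation: append one
# sampled word at a time, conditioned on the text so far.

import random
from collections import defaultdict

random.seed(42)

corpus = "scientists found a herd of unicorns in the andes mountains".split()

# "Train": record which word follows which (a stand-in for the real model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(prompt, max_words=8):
    """Autoregressive loop: sample and append one word per step."""
    words = prompt.split()
    for _ in range(max_words):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("scientists"))
```

GPT-2 does the same thing at a vastly larger scale: instead of a bigram table, a 1.5-billion-parameter network predicts the next token from the entire preceding context, which is what lets it sustain a plausible story from just a phrase or two.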

Fearing abuse by unscrupulous users, the developers did not release the full model. For fellow researchers, they posted a simplified version of the algorithm on GitHub and linked to a preprint of the scientific article. The overall results are published on the OpenAI blog.

GPT-2 is a general-purpose algorithm. The developers trained it to answer questions, “understand” the logic of a text or a sentence, and complete phrases. On these tasks it performed worse than special-purpose models. The researchers suggest the results can be improved by expanding the training datasets and using compute more efficiently.