Current Projects
Research and development of a deep packet inspection infrastructure.
Start of project – 2015. Customer - RFBR. The importance of network traffic analysis is constantly increasing: novel network technologies reach the market immediately after development, increasing the volume of data (including personal and sensitive information) transmitted over the network by innumerable network applications, many of which implement closed application-level protocols. Available network analysis tools typically do not offer generic facilities for inspecting application protocols; usually only widespread protocols are supported.
Research and development of methods to search for reusable code fragments (clones).
Start of project – 2015. Customer - RFBR. Reuse of code fragments is common in software development. At the source code level, a clone may be a part of a program that performs a similar role but was copied with slight modifications. At the binary level, clones may be object files from libraries that are linked into several executable files of the program.
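For illustration, the following sketch shows what a source-level clone pair might look like: the second function was copied from the first with renamed identifiers and a changed constant, which is exactly the kind of near-duplicate that clone search methods aim to report (both functions are invented for this example).

```cpp
// A minimal illustration of a source-level clone pair: the second
// function was copied from the first and lightly edited (renamed
// identifiers, a changed constant).

#include <vector>

// Original fragment.
int sumAboveThreshold(const std::vector<int>& values, int threshold) {
    int total = 0;
    for (int v : values) {
        if (v > threshold)
            total += v;
    }
    return total;
}

// Cloned fragment: same structure, renamed variables, different constant.
int sumLargePayments(const std::vector<int>& payments) {
    int result = 0;
    for (int p : payments) {
        if (p > 1000)
            result += p;
    }
    return result;
}
```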
Many hardware-based techniques have been developed for support of increasing data flows: high-speed network channels and memory buses, high frequency CPUs, hard disks with high data density and low access time. However, numerous unsolved problems remain on the software side dealing with processing, analyzing and storing data. This software must use hardware resources efficiently and also satisfy rigid requirements: support batch processing of huge data volumes with high throughput, provide reliable functioning on unreliable hardware, allow for good scaling and efficient random data access. This project is aimed at creating a framework for data acquisition, filtering, analysis and storage in real time on high-speed network channels. This framework will allow automation of a wide range of tasks related to high-speed data flows: classifying traffic, ensuring network security, analyzing social networks, and forecasting using big data.
Static analysis of program source code for program understanding.
Start of project – 2014. Customer - RFBR. The project goal is to create methods for solving program understanding problems that arise during the program lifecycle. The basic information for such methods is the program structure, that is, program entities, relations between them, and their metrics. The methods will be used to ease back/forward porting of code changes between different versions of a given program.
The idea of the project is to build a solution for processing Big Data collected from numerical simulation of continuum mechanics problems.
Optimization algorithms for virtual machine placement in the SaaS cloud computing model.
Start of project – 2014. The main aim of the project is to create software tools that allow more efficient use of computing resources in the cloud. The results are applied in the UniHUB system for hosting applications on virtual machines running under OpenStack.
Research and development of software obfuscation methods.
Start of project – 2014. Customer - RFBR. Many different methods are used to protect binary code from analysis; one of them is obfuscating transformations. Such transformations are usually performed by automatic obfuscators, which take source code or a binary file as input and produce an obfuscated executable program as output.
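As a simple illustration of one well-known obfuscating transformation (not a description of the methods developed in this project), the sketch below inserts an opaque predicate: a condition that is always true but is tedious for an analyzer to resolve, so the decoy branch adds bogus control flow without changing program behavior.

```cpp
// A minimal sketch of an "opaque predicate" transformation.
// x*x + x = x(x+1) is a product of consecutive integers, so it is
// always even; the "else" branch is never taken, but an analyzer
// must still reason about it.

#include <cstdio>

int original(int a, int b) {
    return a + b;
}

int obfuscated(int a, int b) {
    volatile unsigned x = static_cast<unsigned>(a); // volatile discourages constant folding
    if ((x * x + x) % 2 == 0) {                     // opaque predicate: always true
        return a + b;                               // real computation
    } else {
        return a - b;                               // dead "decoy" branch
    }
}

int main() {
    std::printf("%d %d\n", original(2, 3), obfuscated(2, 3)); // both print 5
    return 0;
}
```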
Developing tools for programming heterogeneous exaflop computing systems.
Start of project – 2013. Customer - RFBR. The project goal is to create system toolchain software that improves programmer productivity on distributed heterogeneous systems (typically with nodes containing a couple of multicore CPUs and one or more accelerators such as GPUs). We will research tools for finding program bottlenecks and critical errors (including in multithreaded code) and try new programming standards. We will also improve problem-specific parallel algorithms in sparse matrix libraries and in the OpenFOAM framework for CFD problems.
Static analyzer for the C# language: SharpChecker
SharpChecker is a platform for static analysis of C# programs aimed at finding bugs. The tool contains both a code analyzer engine and components ready for integration into industrial development processes. SharpChecker can be used not only by programmers to fix errors in a project, but also by managers as another dynamic metric for evaluating product quality.
ISP Obfuscator. Code obfuscation to protect against vulnerability exploitation
ISP Obfuscator is based on long-term research that ISP RAS started as early as 2002. Over these years the obfuscation technology has grown from basic research to industrial deployment and is covered by dozens of publications and two PhD theses. ISP Obfuscator integrates with compilers to make the transformations transparent for developers. At the moment two compiler infrastructures are supported: LLVM and GCC.
ISP RAS has developed the Svace static analysis tool, which satisfies all requirements for a production-quality analyzer. Svace supports the C/C++, Java, and C# programming languages (the C# analyzer can also be shipped separately, as it is implemented as a standalone tool), and it runs on Linux and Windows. Svace analyzes programs built for Intel x86/x86-64 (Linux/Windows) and ARM/ARM64 architectures. Popular C/C++ compilers for Linux and Windows are supported, as well as a range of compilers for embedded systems.
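As an illustration of the kind of defect such analyzers are designed to report, the following invented C++ fragment (not taken from Svace's documentation or test suite) contains a possible null pointer dereference on the path where the early check does not return.

```cpp
// Invented example of a typical static-analysis finding: a pointer is
// dereferenced on a path where it may still be null.

#include <cstdio>
#include <cstring>

void printName(const char* name) {
    if (name == nullptr) {
        std::puts("anonymous");
        // Missing "return" here: execution falls through to the call below.
    }
    std::printf("name length = %zu\n", std::strlen(name)); // possible null dereference
}
```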
Binary Code Analysis Platform Based on QEMU Emulator
QEMU is a full-system multi-target open source emulator. It is widely used for software cross-development. Many large companies (e.g., Google, Samsung, Oracle) prototype and emulate their hardware platforms and peripheral devices on QEMU. QEMU 2.9 emulates 20 different hardware platform families, including x86, PowerPC, SPARC, MIPS, and ARM.
Nowadays, the task of network traffic analysis is increasingly relevant: the reasons are the improvement and deployment of new network technologies (VoIP, P2P, streaming video) and the emergence of numerous application-level protocols used by new network applications. Offline or online analysis is employed, depending on the particular analysis system and the problem being solved.
ISP Crusher: a dynamic analysis toolset
ISP Crusher is a toolset that combines various dynamic analysis approaches. It includes ISP Fuzzer, a fuzzing tool, and Sydr, an automatic test generation tool for complex programs. Two other ISP RAS analyzers, BinSide and Casr, will be included in Crusher within the next two years. Crusher allows organizing a development process that is fully compliant with GOST R 56939-2016 and other regulatory requirements of FSTEC of Russia.
Casr: crash analysis and severity reporting tool
Casr creates automatic reports for crashes that happen during program testing or deployment. The tool works by analyzing Linux coredump files. The resulting reports contain the crash's severity and additional data that helps pinpoint the error cause.
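Purely as an illustration of the general idea of crash triage, and not of Casr's actual analysis or report format, a severity classification step might look roughly like the hypothetical sketch below, which derives a verdict from the terminating signal and the faulting address.

```cpp
// Hypothetical triage logic (not Casr's implementation): classify a
// crash's likely severity from the signal that terminated the process
// and the faulting address extracted from the coredump.

#include <csignal>
#include <string>

std::string classifySeverity(int signalNumber, unsigned long faultAddress) {
    switch (signalNumber) {
    case SIGSEGV:
        // Accesses near address 0 usually indicate a plain NULL dereference;
        // other invalid addresses may point to memory corruption.
        return faultAddress < 0x1000 ? "low: null pointer dereference"
                                     : "high: invalid memory access";
    case SIGABRT:
        return "low: assertion failure or heap-consistency abort";
    case SIGFPE:
        return "low: arithmetic error (e.g. division by zero)";
    case SIGILL:
        return "high: illegal instruction, possibly a corrupted code pointer";
    default:
        return "unknown";
    }
}
```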
BinSide: a binary code static analysis tool
Software developers often face the problem of incorporating complex computations, data encryption and compression algorithms, and similar common functionality into their code. This is typically done by using standard libraries specializing in a group of tasks; these libraries are often distributed in binary form only. On the other hand, software maintenance is gradually becoming more and more important within the development cycle; maintenance includes updating both the software's own code and its external libraries. External libraries and auxiliary programs distributed in binary form need to conform to quality and security standards.
Finished Projects
The idea of the project was to create a technological advance in the area of direct numerical simulation of turbulence and the large eddy method, as well as to find ways of using supercomputers effectively in industrial applications. Under the project, software implementing algorithms for numerical simulation of gas- and hydrodynamics problems in industrial applications was developed on the basis of the OpenFOAM free software package. On the basis of this software, a method of using a supercomputer for numerical modeling of gas- and hydrodynamics problems in industrial applications was developed.
Most existing tools for analyzing various parallel programming libraries (MPI, OpenMP) and languages use low-level approaches to analyze the performance of parallel applications. There are many profiling tools and trace visualizers that produce tables and graphs with various statistics about the executed program. In most cases the developer has to look for bottlenecks and opportunities for performance improvement manually in the produced statistics and graphs. The amount of information the developer has to handle manually increases dramatically with the number of cores, the number of processes, and the problem size. Therefore, new performance analysis methods that fully or partially process this output information would be more beneficial.
The idea of the project is to create a technological advance in the development of an effective method for simulating unsteady near-field turbulent flows with the accuracy required by engineering applications, as well as in the development of software for calculating the acoustic fields of near-field turbulent flows on hybrid-architecture supercomputers.
"Virtual supercomputer" software was developed in this project. The software complex is developed in free software model and is based on open source code components.
The project is aimed at developing a software toolset for automated vulnerability detection and exploit construction. The toolset is designed to reveal vulnerabilities in the binary code of programs that operate over a network.
Research and development of format recovery methods.
Start of project – 2011. End of project - 2013. Customer - RFBR. One of the widespread problems in binary code analysis is recovering the structure of incoming network packets or files read by a program. For protected binary code, the difficulty of manual format recovery becomes unacceptably high. This project proposes to create an automated format recovery system that does not require its user to have specific knowledge about the target system software. Such a system will increase work efficiency and recovery accuracy.
High-level models of parallel computations and runtime libraries.
Start of project – 2011. End of project - 2012. Customer - The Ministry of Education and Science. The project proposed a programming model for distributed heterogeneous computation systems in which a single node consists of a multicore general-purpose computer (host machine) and one or several PLDs. The proposed model for programming heterogeneous systems combines the best approaches to creating high-level programming models with approaches that exploit accelerator capabilities through runtime libraries with maximum efficiency. At the high level, a programmer describes a data-parallel algorithm, which can then be parameterized for a particular heterogeneous node.
Development of web research center for software analysis.
Start of project – 2011. End of project - 2012. Customer - The Ministry of Education and Science. A prototype of a web center for program analysis was developed under the project on the basis of the UniHUB technological platform software components developed at ISP RAS, the computation infrastructure of the "University cluster" program, and the Avalanche open program analysis package.
The project was aimed at creating an experimental platform for numerical simulation on top of the OpenFOAM library for heterogeneous computer systems with graphics processing units, transferring the most resource-intensive computations to the GPU using CUDA technology and managing the interaction between the CPU and the GPU.
Optimizing programs on the target user machine for the target architecture and user's behavior.
Start of project – 2011. End of project - 2013. Customer - RFBR. When designing a compilation system for general-purpose languages that takes into account the specifics of the target hardware and the most likely usage patterns, one has to employ dynamic and adaptive recompilation methods. It is favorable to research these methods within the LLVM infrastructure.
Providing access to distributed resources as a web-service from infrastructure to application level.
Start of project – 2010. End of project - 2012. Customer - The Ministry of Education and Science. During the project, the problems of researching access methods to high-performance resources and of developing an experimental prototype of a hardware-software platform providing access to high-performance resources as web services were solved.