
Consultancy Stories

Examples of our customer success efforts

  1. Story 1 - Helprack is developing an LLVM Compiler Toolchain for an open source programming language and platform for cloud native applications. This customer had performance issues with the Java-based compiler. Learn how Helprack is working with the customer. Read More...
  2. Story 2 - The Helprack team developed a comprehensive solution using GCC for a VLIW processor with predication, SIMD operations, complex arithmetic and CORDIC functions. Read More...
  3. Story 3 - Helprack is working with a customer to implement vectorization in their GCC-based compiler. This customer develops licensable processor IPs for media processing. Learn how Helprack is developing a vectorizing compiler for their variable-length vector processing units. Read More...
  4. Story 4 - The Helprack team developed one analysis pass and two optimization passes in the Link Time Optimizer (LTO) of GCC for a client who wanted to upstream these passes to the GCC community. Learn how Helprack worked with this client to develop these passes. Read More...
  5. Story 5 - The Helprack team worked over a period of 10 months with this Semiconductor major to fix some performance issues in the LLVM compiler (AArch64) with the GeekBench benchmark and a few other functional issues. The fixes were upstreamed to the LLVM community. Read More...
  6. Story 6 - The Helprack team, consisting of 4 engineers, worked over a period of 18 months with this Semiconductor major alongside their existing compiler team to analyze performance bottlenecks in the Open64 Compiler (x86) and implement C++ AMP features in an LLVM-based Heterogeneous Compiler. Read More...
  7. Story 7 - Helprack has been researching the compilation of AI/ML models written in TensorFlow for execution on a custom compute fabric for Edge Computing (Inferencing) for a stealth-mode startup in the AI/ML chip space. Read More...
  8. Story 8 - The engineering organization of a Fortune 500 security company needed a unified, real-time, user-friendly dashboard of engineering data with a 360-degree view of its deliverables. Read More...

Story1 Details  

The customer, developing an open source programming language, already had a compiler for the language written in Java that used the Java runtime. However, they noticed performance issues associated with Java and hence wanted a native compiler as well. The Helprack team is currently engaged in developing a native compiler using the LLVM framework and Rust runtime libraries.

We would love to collaborate with you. Let's connect.

Story2 Details  

The Helprack team developed a compiler for a series of VLIW processors by porting the GNU tools to improve productivity for the control code on these processors. The developer tools solution included the GCC compiler (without instruction packing), Binutils with a CGEN-based assembler/disassembler, and a port of GDB with remote debugging over a serial line. This enabled rapid development of control path code. Customer requirements included several special instructions that did not have matching semantics in C code; for such cases, many new intrinsics were created in the compiler. The resulting toolchain worked on both Linux and Windows platforms.
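As a flavour of how such special instructions are exposed to the programmer, the sketch below shows a hypothetical complex multiply-accumulate intrinsic with a portable fallback definition. The name my_cmac_q15, the fixed-point format and the single-instruction mapping are illustrative assumptions, not the customer's actual API.

```cpp
// Illustrative only: my_cmac_q15 stands in for one of the customer's
// intrinsics; on the ported toolchain a call like this would lower to a
// single complex-MAC instruction, while the portable definition below
// gives the same result on any compiler.
#include <cstdint>
#include <cstdio>

struct cq15_t { int16_t re, im; };   // Q1.15 complex sample

// Accumulate the real part of a * conj(b) -- typical of an FIR/correlator.
static inline int32_t my_cmac_q15(int32_t acc, cq15_t a, cq15_t b)
{
    return acc + int32_t(a.re) * b.re + int32_t(a.im) * b.im;
}

int main()
{
    cq15_t x[4] = {{100, 5}, {-20, 7}, {30, -9}, {4, 4}};
    cq15_t h[4] = {{50, 0}, {25, 0}, {12, 0}, {6, 0}};
    int32_t acc = 0;

    for (int i = 0; i < 4; ++i)
        acc = my_cmac_q15(acc, x[i], h[i]);

    std::printf("correlation = %d\n", int(acc));
    return 0;
}
```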

We can build a similar solution for you. Let's talk.

Story3 Details 

The customer has multiple processor cores with vector units of different sizes. These cores can operate on an entire vector register or on any arbitrary part of it based on a register mask. They also have hundreds of instructions that have no direct equivalent in the C/C++ programming paradigm and therefore need to be implemented as intrinsics; these intrinsics also have vector forms. The customer wanted regular arithmetic and logical operations, as well as these intrinsics, to be vectorized for their cores. In addition, they wanted the tail of each loop to be converted into linear code using the masking feature of the vector registers. The other features their vector processors support are striding and scatter/gather.

Helprack augmented the vectorizer in GCC to convert the tail loop into linear code, and implemented an innovative mechanism for the intrinsics and their vectorized forms, such that the same scalar intrinsic can be called for any processor and the compiler generates vectorized code with the vector size suitable for that processor. Helprack also implemented the scatter/gather and striding operations in GCC for this family of processors.
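The sketch below illustrates, with invented names, what this looks like from the programmer's side: a single scalar loop with a scalar intrinsic call that the modified GCC can vectorize at whichever vector length the selected core has, executing the leftover tail iterations as one masked vector operation instead of a scalar remainder loop. The intrinsic sat_add16 and its fallback body are assumptions for illustration.

```cpp
// Illustrative sketch only: sat_add16 stands in for one of the customer's
// scalar intrinsics. The vectorization and tail-masking described in the
// comments is what the modified GCC does; this portable fallback merely
// defines the scalar semantics.
#include <cstdint>
#include <cstdio>

// Scalar saturating add. In the customer's compiler this is an intrinsic
// that also has a vector form; the same scalar call is written once and
// works for every core.
static inline int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = int32_t(a) + int32_t(b);
    if (s >  32767) s =  32767;
    if (s < -32768) s = -32768;
    return int16_t(s);
}

void add_signals(const int16_t *a, const int16_t *b, int16_t *out, int n)
{
    // The augmented vectorizer turns this loop into vector saturating adds
    // sized to the target core's vector unit; the final n % VL elements run
    // as one masked vector operation rather than a scalar tail loop.
    for (int i = 0; i < n; ++i)
        out[i] = sat_add16(a[i], b[i]);
}

int main()
{
    int16_t a[5] = {30000, -30000, 10, 20, 30};
    int16_t b[5] = {10000, -10000,  1,  2,  3};
    int16_t r[5];
    add_signals(a, b, r, 5);
    for (int16_t v : r) std::printf("%d ", v);
    std::printf("\n");
    return 0;
}
```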

If you have a vector core, we could help you exploit the full potential of your core using either the GCC or LLVM framework. Let the Helprack team assist you.


Story4 Details 

This customer wanted to implement a structure layout reorganization pass and a whole-program global constant propagation pass, which were expected to improve the performance of a few SPEC benchmarks, most notably mcf. To perform the structure reorganization, an Inter-Procedural Analysis (IPA) pass was needed first, to determine whether the structure in question escapes the compilation module. For every non-escaping structure, the customer then wanted any unused fields deleted and the remaining fields reordered so as to minimize padding within the structure. The customer also wanted an inter-procedural analysis pass to identify all cases where a single write of a constant is followed by multiple read paths, and to propagate the constant across all those read paths. All of this was to be accomplished at the whole-program (WHOPR) level in the Link Time Optimizer (LTO).

Helprack implemented these three passes in the LTO framework of GCC within a short period of time (3 months) and ensured that all the SPEC benchmarks continued to work flawlessly. The code adhered to the GCC community guidelines, which ensured that the patches were accepted by the community.
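To make the structure layout idea concrete, here is a hypothetical before/after pair showing the kind of transformation the pass performs; the structs are invented for illustration and are not taken from mcf or the customer's code.

```cpp
// Hypothetical illustration of structure layout reorganization for a
// non-escaping struct: drop a field that is never read and reorder the
// remaining fields so the compiler inserts less padding.
#include <cstdio>

struct NodeOriginal {          // typical "as written" layout
    char   flag;               // 1 byte + 7 bytes padding
    double weight;             // 8 bytes
    char   tag;                // 1 byte + 3 bytes padding
    int    id;                 // 4 bytes
    int    unused_debug_id;    // written once, never read -> removable
};                             // commonly 32 bytes on LP64 targets

struct NodeReorganized {       // what the pass would produce
    double weight;             // 8 bytes
    int    id;                 // 4 bytes
    char   flag;               // 1 byte
    char   tag;                // 1 byte + 2 bytes padding
};                             // commonly 16 bytes on LP64 targets

int main()
{
    std::printf("original:    %zu bytes\n", sizeof(NodeOriginal));
    std::printf("reorganized: %zu bytes\n", sizeof(NodeReorganized));
    return 0;
}
```

Shrinking the structure in this way improves cache utilization for pointer-heavy code, which is the typical reason such a pass helps benchmarks like mcf.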

We can build similar optimization passes for you in GCC or LLVM, whether for open source contribution or your internal consumption. Let's talk.

Story5 Details 

There were two performance issues in the LLVM compiler vis-à-vis the GeekBench benchmark that the customer had already identified. The Helprack team worked with the customer to implement fixes for these performance issues and upstreamed the fixes to the LLVM community after due community review. These fixes improved the GeekBench score for that platform by 1.5%. The customer had also identified about a dozen functional issues in LLVM. Helprack fixed 9 of them; the remaining ones were not reproducible. Some of the fixes were upstreamed to the LLVM community, while the rest were taken up by the customer for follow-up in the community.

We can help you fix performance and functionality issues with your existing compiler and also take ownership of the continuous engineering of your legacy compilers. Let's talk.

Story6 Details

This customer was supporting the Open64 compiler for the scientific computing community. The Helprack team was tasked with analyzing the SPEC benchmarks for this compiler, identifying its performance bottlenecks, proposing optimizations to overcome those bottlenecks, implementing hand-coded prototypes of the proposed optimizations, and demonstrating their potential performance improvements. The team analyzed two benchmarks, bzip2 and cactus, proposed vectorization of a hot loop in the former and loop fission in the latter, and demonstrated potential performance gains of 2% and about 1.5% respectively.
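As an illustration of the second proposal, the sketch below shows the general shape of a loop-fission transformation on an invented loop (not the actual cactus kernel): one fused loop that drags two unrelated working sets through the cache is split into two simpler loops, each with better locality and each an easier candidate for vectorization.

```cpp
// Illustrative only: an invented loop showing the loop-fission shape,
// not the actual hot loop from the cactus benchmark.
#include <cstddef>
#include <vector>

void fused(std::vector<double>& a, std::vector<double>& b,
           const std::vector<double>& x, const std::vector<double>& y)
{
    // Original fused loop: the two statements touch unrelated arrays,
    // so every iteration pulls both working sets through the cache.
    for (std::size_t i = 0; i < a.size(); ++i) {
        a[i] = x[i] * 2.0;
        b[i] = y[i] + 1.0;
    }
}

void fissioned(std::vector<double>& a, std::vector<double>& b,
               const std::vector<double>& x, const std::vector<double>& y)
{
    // After fission: two independent loops with smaller working sets,
    // each of which the compiler can vectorize on its own.
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = x[i] * 2.0;
    for (std::size_t i = 0; i < b.size(); ++i)
        b[i] = y[i] + 1.0;
}

int main()
{
    std::vector<double> a(8), b(8), x(8, 1.0), y(8, 2.0);
    fused(a, b, x, y);       // both versions compute the same result
    fissioned(a, b, x, y);
    return 0;
}
```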

Additionally, the team was tasked with implementing certain features of C++ AMP in their LLVM-based Heterogeneous Compiler, most notably the parallel_for_each, array_view and restrict constructs. The team made significant contributions to the codebase for each of these three features. In addition, the team fixed many functional issues in the same compiler.
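For readers unfamiliar with C++ AMP, the snippet below is a minimal, generic usage example of those three constructs against Microsoft's <amp.h> (it needs an MSVC toolchain with C++ AMP support); it is not code from the customer's heterogeneous compiler.

```cpp
// A minimal C++ AMP example showing array_view, parallel_for_each and
// restrict(amp). Generic illustration only.
#include <amp.h>
#include <vector>
#include <cstdio>

int main()
{
    std::vector<int> data(1024, 1);

    // array_view wraps host memory so the runtime can copy it to and
    // from the accelerator as needed.
    concurrency::array_view<int, 1> av(static_cast<int>(data.size()), data);

    // parallel_for_each launches one invocation of the lambda per index on
    // the accelerator; restrict(amp) limits the body to AMP-legal code.
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp)
        {
            av[idx] = av[idx] * 2;
        });

    av.synchronize();   // copy results back to the host vector
    std::printf("data[0] = %d\n", data[0]);
    return 0;
}
```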

We can help you identify performance and functionality issues with your existing compiler and provide fixes for the same. Let's talk.

Story7 Details

This customer has developed an AI/ML chip that has a compute fabric akin to a GPU and can compute on the Edge. The chip is intended to be deployed for inferencing use cases. Helprack is exploring the possibility of compiling CNN and RNN models written in TensorFlow to this custom architecture via the ONNX -> MLIR -> LLVM IR route.

If you are building an AI/ML chip, we could help you build a compiler for it. Let's connect.


Story8 Details 

Helprack's data management services team delivered a comprehensive engineering dashboard with metrics and insights by integrating Jira, Slack channels, product specs, bug tracking and source control. By establishing commonality across data elements and allowing different views into them, the dashboard enabled transparency, better control of the overall engineering process, and quicker response to issues.

Interested? Let's talk.