Products

Discover probabilistic AI and integrate it into your AI systems.

Not sure where to start?

Do you have an exciting AI project but do not know where to start? We can help you develop the AI algorithms for your applications. Whether or not you already have the specifications of your AI system, we can start working together at any step of the process.

On-demand acceleration

AI algorithms generally run well on conventional hardware. In some cases, however, a dedicated hardware architecture makes sense and improves the system's latency or energy consumption. At HawAI.tech, we have expertise in implementing algorithms on hardware such as FPGAs (Field-Programmable Gate Arrays) and ACAPs (Adaptive Compute Acceleration Platforms) such as the Xilinx Versal (TM). Let's work together on increasing the performance of your deployed AI.

Edge inference architecture

Our first generic architecture will allow probabilistic AI to be used for edge inference tasks across a variety of use cases. It will be implemented on a Xilinx Versal (TM) chip. It will sense and analyze its environment and take action when needed. Thanks to its optimized implementation, it will improve energy efficiency by a factor of 40 compared to existing architectures, at a constant power consumption of 10 W. Contact us for more information and closed-beta access to this architecture.
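To make the idea of probabilistic edge inference more concrete, here is a minimal sketch of a sense, analyze, act loop built around an exact Bayesian update. It is purely illustrative: the fault-detection scenario, the sensor model, the probability values, and the alert threshold are made-up assumptions, and the code does not describe our architecture or its implementation.

```python
# Illustrative sketch only: a sense -> analyze -> act loop using an exact
# Bayesian update, the kind of probabilistic inference an edge device can
# run on each new sensor reading. All numbers below are hypothetical.

# Prior belief that the monitored machine is faulty.
p_fault = 0.05

# Hypothetical sensor model: probability of an anomalous reading
# given each state of the machine.
P_ANOMALY_GIVEN_FAULT = 0.90
P_ANOMALY_GIVEN_OK = 0.10

def update_belief(p_fault, anomalous_reading):
    """Bayes' rule: posterior probability of a fault after one observation."""
    if anomalous_reading:
        likelihood_fault = P_ANOMALY_GIVEN_FAULT
        likelihood_ok = P_ANOMALY_GIVEN_OK
    else:
        likelihood_fault = 1.0 - P_ANOMALY_GIVEN_FAULT
        likelihood_ok = 1.0 - P_ANOMALY_GIVEN_OK
    evidence = likelihood_fault * p_fault + likelihood_ok * (1.0 - p_fault)
    return likelihood_fault * p_fault / evidence

# Sense: a short stream of hypothetical sensor readings.
readings = [True, True, False, True]

# Analyze: update the belief after each reading.
for r in readings:
    p_fault = update_belief(p_fault, r)
    print(f"reading={r}  P(fault)={p_fault:.3f}")

# Act: trigger an action only when the belief crosses a threshold.
if p_fault > 0.5:
    print("Fault likely: raising maintenance alert.")
```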

Edge inference & learning chip

Our first chip will integrate our architecture and improve performance by a factor of 500 in samples per watt compared to the competition. It will target edge AI applications and improve the tasks of sensing, analyzing, and acting. Its main advantage will be the ability to learn and train at the edge while deployed.

If you want your AI models to become more explainable and more energy-efficient, feel free to contact us.

Contact