Red Hat reaches for Neural Magic to obtain inference acceleration software
The acquisition target has been developing a software stack that serves models and accelerates AI inference on CPUs and GPUs in the cloud, on-premises or at the edge. It has also taken the lead in commercializing vLLM, the open-source virtual large language model project that originated at the University of California, Berkeley.