SqueezeBits builds compressed neural networks for faster, lighter AI applications. They profile models to identify optimization opportunities, then apply quantization and other compression techniques to reduce model size and inference latency for hardware-optimized deployment on targets ranging from servers to edge devices. Their no-code toolkit and services help customers migrate models across deployment environments.
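
As an illustration of the kind of quantization workflow described above, the sketch below applies PyTorch's post-training dynamic quantization to a toy model. It is a generic, minimal example, not SqueezeBits' actual toolkit or API; the model and layer choices are assumptions for demonstration only.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# Generic example only -- not SqueezeBits' toolkit.
import torch
import torch.nn as nn

# A small hypothetical model standing in for a customer network.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert Linear layer weights to int8; activations are quantized
# dynamically at inference time, shrinking the model and typically
# reducing CPU inference latency.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Both models accept the same inputs; the quantized one is smaller.
x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)
```

In practice, a profiling pass like the one described above would guide which layers to quantize and which technique (dynamic, static, or quantization-aware training) best fits the target hardware.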