Allen Institute for AI just dropped Olmo 3.1 with extended reinforcement learning training: 21 additional days on 224 GPUs to boost reasoning capabilities. What's interesting here is the continued focus on transparency and enterprise control, positioning the release against the black-box trend we're seeing elsewhere