Jason Knight is Co-founder and VP of ML at OctoAI – Interview Series

Jason Knight is Co-founder and Vice President of Machine Learning at OctoAI, a platform that delivers a complete stack for app builders to run, tune, and scale their AI applications in the cloud or on-premises.

OctoAI was spun out of the University of Washington by the original creators of Apache TVM, an open source stack for ML portability and performance. TVM allows ML models to run efficiently on any hardware backend, and has quickly become a key part of the architecture of popular consumer devices like Amazon Alexa.

Can you share the inspiration behind founding OctoAI and the core problem you aimed to solve?

AI has historically been a complex field accessible only to those comfortable with the mathematics and high-performance computing required to build something with it. But AI unlocks the ultimate computing interfaces, those of text, voice, and imagery programmed by examples and feedback, and brings the full power of computing to everyone on Earth. Before AI, only programmers were able to get computers to do what they wanted by writing arcane programming language texts.

OctoAI was created to accelerate our path to that reality so that more people can use and benefit from AI. And people, in turn, can use AI to create yet more benefits by accelerating the sciences, medicine, art, and more.

Reflecting on your experience at Intel, how did your earlier roles prepare you for co-founding and leading development at OctoAI?

Intel, and the AI hardware and biotech startups before it, gave me the perspective to see how hard AI is for even the most sophisticated technology companies, and yet how valuable it can be to those who have found ways to apply it. I also saw that the gap between those benefiting from AI and those who aren't yet is primarily one of infrastructure, compute, and best practices, not magic.

What differentiates OctoStack from other AI deployment solutions available in the market today?

OctoStack is the industry's first full technology stack designed specifically for serving generative AI models anywhere. It is a turnkey production platform that provides highly optimized inference, model customization, and asset management at enterprise scale.

OctoStack enables organizations to achieve AI autonomy by running any model in their preferred environment with full control over data, models, and hardware. It also delivers unmatched performance and cost efficiency, with savings of up to 12X compared to other solutions like GPT-4.

Can you explain the advantages of deploying AI models in a private environment using OctoStack?

Models today are ubiquitous, but assembling the right infrastructure to run those models and apply them to your own data is where the business-value flywheel really begins to spin. Using these models on your most sensitive data, and then turning that into insights, better prompt engineering, RAG pipelines, and fine-tuning, is where you can get the most value out of generative AI. But it is still difficult for all but the most sophisticated companies to do this alone, which is where a turnkey solution like OctoStack can accelerate you and bring the best practices together in one place for your practitioners.

Deploying AI models in a private environment using OctoStack offers several benefits, including enhanced security and control over data and models. Customers can run generative AI applications within their own VPCs or on-premises, ensuring that their data stays secure and within their chosen environments. This approach also gives businesses the flexibility to run any model, be it open-source, custom, or proprietary, while benefiting from cost reductions and performance improvements.

What challenges did you face in optimizing OctoStack to support a wide range of hardware, and how were these challenges overcome?

Optimizing OctoStack to support a wide range of hardware involved ensuring compatibility and performance across various devices, such as NVIDIA and AMD GPUs and AWS Inferentia. OctoAI overcame these challenges by leveraging its deep AI systems expertise, developed through years of research and development, to create a platform that continuously updates and supports more hardware types, GenAI use cases, and best practices. This allows OctoAI to deliver market-leading performance and cost efficiency.

Moreover, getting the latest capabilities in generative AI, such as multi-modality, function calling, strict JSON schema following, efficient fine-tune hosting, and more, into the hands of your internal developers will accelerate your AI takeoff point.
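To make "strict JSON schema following" concrete, here is a minimal sketch of how an application developer might request schema-constrained output from an OpenAI-compatible chat completions endpoint. The base URL, environment variables, and model name are placeholders for illustration, not confirmed OctoAI API details.

```python
import os
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; base_url and model are placeholders.
client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://example.com/v1"),
    api_key=os.environ["LLM_API_KEY"],
)

# JSON schema the model's reply must conform to.
schema = {
    "name": "ticket",
    "schema": {
        "type": "object",
        "properties": {
            "customer": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["customer", "priority"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="example-llm",  # placeholder model identifier
    messages=[{"role": "user", "content": "ACME says checkout is down, urgent."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(response.choices[0].message.content)  # JSON string matching the schema
```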

OctoAI has a rich history of leveraging Apache TVM. How has this framework influenced your platform's capabilities?

We created Apache TVM to make it easier for sophisticated developers to write efficient AI libraries for GPUs and accelerators. We did this because getting the most performance from GPU and accelerator hardware was as essential for AI inference then as it is now.
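As an illustration of the kind of kernel work TVM automates, here is a minimal sketch using TVM's tensor expression (TE) API as it appeared in earlier releases; the operator and schedule are purely illustrative, not OctoAI code.

```python
import numpy as np
import tvm
from tvm import te

# Declare a vector-add computation symbolically.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Create a default schedule; on GPU targets this is where loop
# splitting and thread binding would be applied.
s = te.create_schedule(C.op)

# Compile to a native function for the local CPU ("cuda" would target NVIDIA GPUs).
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled kernel on concrete data and check the result.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
c = tvm.nd.empty((1024,), "float32", dev)
fadd(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```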

We've since applied that same mindset and expertise across the entire Gen AI serving stack to deliver automation for a broader set of developers.

Can you discuss any significant performance improvements that OctoStack offers, such as the 10x performance boost in large-scale deployments?

OctoStack offers significant performance improvements, including savings of up to 12X compared to other models like GPT-4 without sacrificing speed or quality. It also provides 4X better GPU utilization and a 50% reduction in operational costs, enabling organizations to run large-scale deployments efficiently and cost-effectively.

Can you share some notable use cases where OctoStack has significantly improved AI deployment for your clients?

A notable use case is Apate.ai, a global service combating phone scams using generative conversational AI. Apate.ai leveraged OctoStack to efficiently run their suite of language models across multiple geographies, benefiting from OctoStack's flexibility, scale, and security. This deployment allowed Apate.ai to deliver custom models supporting multiple languages and regional dialects, meeting their performance and security-sensitive requirements.

In addition, we serve hundreds of fine-tunes for our customer OpenPipe. Were they to spin up dedicated instances for each of these, their customers' use cases would be infeasible as they grow and evolve those use cases and continuously re-train their parameter-efficient fine-tunes for optimal output quality at cost-effective prices.

Thank you for the great interview, readers who wish to learn more should visit OctoAI.
