Researchers at the University of California San Diego have developed a new technique that enables large AI models to learn tasks with far less data and computing power. These models, which power tools ranging from chatbots to biological analysis systems, can now be adapted efficiently without massive resources.
AI models consist of billions of parameters that control how they interpret information. Conventional fine-tuning adjusts every parameter, a process that demands substantial computation and risks overfitting, in which a system memorizes its training examples rather than learning the underlying patterns. Overfitting reduces accuracy on unfamiliar tasks.
The UC San Diego team’s approach refines the process by modifying only crucial components instead of retraining the entire system. Their strategy minimizes costs while boosting flexibility and generalization, outperforming traditional fine-tuning methods.
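The contrast between full fine-tuning and selective tuning can be sketched in a few lines. The article does not specify which components the UC San Diego method selects, so the snippet below uses a LoRA-style low-rank adapter purely as an illustration: the pretrained weights are frozen, and only a small added matrix pair is trained. The layer size, rank, and helper function are all hypothetical.

```python
# Illustrative sketch only: the paper's exact selection method is not
# described here; a LoRA-style low-rank adapter stands in for the idea
# of training a small subset of parameters instead of the whole model.

def count_trainable(matrices):
    """Count scalar entries across a list of matrices (lists of lists)."""
    return sum(len(row) for m in matrices for row in m)

# A toy "pretrained" layer: a 64x64 weight matrix (4,096 parameters).
d = 64
W = [[0.01 * (i + j) for j in range(d)] for i in range(d)]

# Full fine-tuning: every entry of W is trainable.
full_trainable = count_trainable([W])

# Parameter-efficient tuning: freeze W and train only a rank-2 adapter,
# matrices A (64x2) and B (2x64); the effective weight is W + A @ B.
r = 2
A = [[0.0] * r for _ in range(d)]
B = [[0.0] * d for _ in range(r)]
adapter_trainable = count_trainable([A, B])

print(full_trainable, adapter_trainable)
# 4096 trainable parameters vs 256 -- a 16x reduction for this toy layer.
```

The reported 326x and 408x reductions come from applying this kind of selective tuning at the scale of full protein models, where the frozen portion dwarfs the trained subset.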
The team demonstrated their method on protein-based AI models that predict protein traits with minimal training samples. In testing, the system exceeded conventional methods in forecasting whether peptides could cross the blood-brain barrier while using 326 times fewer parameters. It also maintained similar accuracy in predicting protein thermostability with 408 times fewer parameters.
Professor Pengtao Xie of the university's electrical and computer engineering department said the innovation allows smaller labs and startups to customize AI models without expensive infrastructure or massive datasets, calling it a step toward making artificial intelligence more accessible to all.
