AdaMix, a parameter-efficient fine-tuning method, outperforms full model fine-tuning in few-shot NLU tasks across benchmarks like GLUE. Using prompt-based strategies without extra validation or unlabeled data, AdaMix consistently boosts performance with both BERT and RoBERTa encoders, demonstrating stability and efficiency in few-shot scenarios.

Smarter AI Training with Few-Shot Natural Language Tasks


Abstract and 1. Introduction

  2. Background

    2.1 Mixture-of-Experts

    2.2 Adapters

  3. Mixture-of-Adaptations

    3.1 Routing Policy

    3.2 Consistency regularization

    3.3 Adaptation module merging and 3.4 Adaptation module sharing

    3.5 Connection to Bayesian Neural Networks and Model Ensembling

  4. Experiments

    4.1 Experimental Setup

    4.2 Key Results

    4.3 Ablation Study

  5. Related Work

  6. Conclusions

  7. Limitations

  8. Acknowledgment and References

Appendix

A. Few-shot NLU Datasets

B. Ablation Study

C. Detailed Results on NLU Tasks

D. Hyper-parameters

A Few-shot NLU Datasets

Data. In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments following the prior study (Wang et al., 2021) on six tasks including MNLI (Williams et al., 2018), RTE (Dagan et al., 2005; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), QQP[1] and SST-2 (Socher et al., 2013). The results are reported on their development sets following (Zhang et al., 2021). MPQA (Wiebe et al., 2005) and Subj (Pang and Lee, 2004) are used for polarity and subjectivity detection, where we follow (Gao et al., 2021) and keep 2,000 examples for testing. The few-shot model only has access to |K| labeled samples for any task. Following the true few-shot learning setting (Perez et al., 2021; Wang et al., 2021), we do not use any additional validation set for hyper-parameter tuning or early stopping. The performance of each model is reported after a fixed number of training epochs. For a fair comparison, we use the same set of few-shot labeled instances for training as in (Wang et al., 2021). We train each model with 5 different seeds and report the average performance with standard deviation across the runs. In the few-shot experiments, we follow (Wang et al., 2021) and train AdaMix via the prompt-based fine-tuning strategy. In contrast to (Wang et al., 2021), we do not use any unlabeled data.


B Ablation Study

Table 11: Ablation study demonstrating the impact of parameter sharing in the AdaMix adapter framework.


C Detailed Results on NLU Tasks

The results on NLU tasks are included in Table 1 and Table 13. AdaMix with the RoBERTa-large encoder achieves the best performance in terms of the different task metrics in the GLUE benchmark. AdaMix with adapters is the only PEFT method that outperforms full model fine-tuning on all the tasks and on the average score. Additionally, the improvement brought by AdaMix is more significant with BERT-base as the encoder, demonstrating 2.2% and 1.2% improvement over the performance of full model fine-tuning and the best-performing baseline UNIPELT with BERT-base, respectively. The improvement is consistent with that observed with RoBERTa-large on every task. The NLG results are included in Table 4 and Table 5.

Table 12: Varying the bottleneck dimension of adapters in AdaMix with BERT-base and RoBERTa-large encoders. * denotes the bottleneck dimension used in AdaMix with adapters.

D Hyper-parameters

The detailed hyper-parameter configurations for the different tasks are presented in Table 15 and Table 16.


:::info Authors:

(1) Yaqing Wang, Purdue University ([email protected]);

(2) Sahaj Agarwal, Microsoft ([email protected]);

(3) Subhabrata Mukherjee, Microsoft Research ([email protected]);

(4) Xiaodong Liu, Microsoft Research ([email protected]);

(5) Jing Gao, Purdue University ([email protected]);

(6) Ahmed Hassan Awadallah, Microsoft Research ([email protected]);

(7) Jianfeng Gao, Microsoft Research ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[1] https://www.quora.com/q/quoradata/
