This paper presents the first framework to deliberately train neural networks for accuracy and agreement between feature attribution techniques: PEAR (Post hoc Explainer Agreement Regularizer). In addition to the conventional task loss, PEAR incorporates a correlation-based consensus loss that combines Pearson and Spearman correlation measures, promoting alignment across explainers like Grad and Integrated Gradients. By using a soft ranking approximation to address differentiability issues, the loss function is completely trainable by backpropagation. Tested on three OpenML tabular datasets, multilayer perceptrons trained using PEAR surpass linear baselines in accuracy and explanation consensus, and in certain instances, even compete with XGBoost. The findings advance reliable and interpretable AI by showing that consensus-aware training successfully reduces explanation disagreement while maintaining prediction performance.

Notes on Training Neural Networks for Consensus


Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

2. Related Work

3. PEAR: Post Hoc Explainer Agreement Regularizer

4. The Efficacy of Consensus Training

    4.1 Agreement Metrics

    4.2 Improving Consensus Metrics

    4.3 Consistency at What Cost?

    4.4 Are the Explanations Still Valuable?

    4.5 Consensus and Linearity

    4.6 Two Loss Terms

5. Discussion

    5.1 Future Work

    5.2 Conclusion, Acknowledgements, and References

Appendix

3 PEAR: POST HOC EXPLAINER AGREEMENT REGULARIZER

Our contribution is the first effort to train models to be both accurate and explicitly regularized toward consensus between local explainers. When neural networks are trained in the standard way (i.e., with a single task-specific loss term like cross-entropy), disagreement between post hoc explainers often arises. Therefore, we include an additional loss term that measures the amount of explainer disagreement during training, encouraging consensus between explanations. Since human-aligned notions of explanation consensus can be captured by more than one agreement metric (listed in A.3), we aim to improve several agreement metrics with one loss function.[2]

Our consensus loss term is a convex combination of the Pearson and Spearman correlation measurements between the vectors of attribution scores (Spearman correlation is simply the Pearson correlation on the ranks of a vector).

To paint a clearer picture of the need for two terms in the loss, consider the examples shown in Figure 3. In the upper example, the raw feature scores are very similar and the Pearson correlation coefficient is in fact 1 (to machine precision). However, when we rank these scores by magnitude, their rank orders differ substantially, as indicated by the low Spearman value. Likewise, the lower portion of Figure 3 shows that two explanations with identical rank orderings can still have a low Pearson correlation coefficient. Since some of the metrics we use to measure disagreement involve ranking and others do not, we conclude that a mixture of these two terms in the loss is appropriate.
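As a concrete illustration of the upper case (with made-up numbers, not the values plotted in Figure 3), a single dominant feature shared by both explanations can push the Pearson correlation to nearly 1, while near-tied features leave the rank orders, and hence Spearman, far apart:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Two hypothetical attribution vectors: one dominant feature in common,
# three near-tied features whose order is reversed between explainers.
a = np.array([0.01, 0.02, 0.03, 10.0])
b = np.array([0.03, 0.02, 0.01, 10.0])

print(pearsonr(a, b)[0])   # ~1.0: the raw scores look almost identical
print(spearmanr(a, b)[0])  # 0.2: the rank orders disagree on three features
```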


Figure 2: Our loss function measures the discrepancy between the model outputs and the ground truth (task loss), as well as the disagreement between explainers (consensus loss). The weight given to the consensus loss term is controlled by a hyperparameter 𝜆. The consensus loss term is a convex combination of the Spearman and Pearson correlation measurements between feature importance scores, since increasing both rank correlation (Spearman) and raw-score correlation (Pearson) is useful for improving explainer consensus on our many agreement metrics.

Figure 3: Example feature attribution vectors where Pearson and Spearman show starkly different scores. Recall that both Pearson and Spearman correlation range from −1 to +1. Both of these pairs of vectors satisfy some human-aligned notion of consensus, but in each case one of the correlation metrics gives a low similarity score. Thus, to successfully encourage explainer consensus (by all of our metrics), we use both types of correlation in our consensus loss term.

While the example in Figure 3 shows two explanation vectors with similar scale, different explanation methods do not always align in scale. Some explainers have the sums of their attribution scores constrained by various rules, whereas other explainers have no such constraints. The correlation measurements we use in our loss therefore provide more latitude when comparing explainers than a direct difference measurement like mean absolute error or mean squared error, since correlations are insensitive to the scale of each explanation vector.


We refer to the first term in the loss function as the task loss, or ℓtask, and for our classification tasks we use cross-entropy loss. A graphical depiction of the flow from data to loss value is shown in Figure 2. Formally, our complete loss function can be expressed as follows, with two hyperparameters 𝜆, 𝜇 ∈ [0, 1]. We weight the influence of our consensus term with 𝜆, so lower values give more priority to task loss. We weight the balance between the two explanation correlation terms with 𝜇, so lower values give more weight to Pearson correlation and higher values give more weight to Spearman correlation.

$$\ell = (1 - \lambda)\,\ell_{\text{task}} + \lambda\,\Big[\mu\,\big(1 - r_S(e_A, e_B)\big) + (1 - \mu)\,\big(1 - r_P(e_A, e_B)\big)\Big]$$

where $e_A$ and $e_B$ are the attribution vectors produced by the two explainers, $r_P$ denotes the Pearson correlation, and $r_S$ denotes the Spearman correlation computed on (soft) ranks.
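A minimal PyTorch sketch of a loss with this structure is below. The helper names (`pearson_corr`, `spearman_corr`, `pear_loss`) and the default hyperparameter values are our own illustration, not the authors' released package; the differentiable ranking uses `torchsort.soft_rank`, as described in Section 3.2.

```python
import torch
import torch.nn.functional as F
import torchsort  # pip install torchsort

def pearson_corr(a, b, eps=1e-8):
    # Pearson correlation, computed row-wise over (batch, n_features) tensors.
    a = a - a.mean(dim=-1, keepdim=True)
    b = b - b.mean(dim=-1, keepdim=True)
    return (a * b).sum(dim=-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)

def spearman_corr(a, b, reg_strength=0.1):
    # Spearman correlation is Pearson correlation on ranks;
    # soft ranks keep the whole computation differentiable.
    return pearson_corr(torchsort.soft_rank(a, regularization_strength=reg_strength),
                        torchsort.soft_rank(b, regularization_strength=reg_strength))

def pear_loss(logits, targets, expl_a, expl_b, lam=0.5, mu=0.5):
    # Task loss plus the convex combination of the two correlation-based consensus terms.
    task = F.cross_entropy(logits, targets)
    consensus = (mu * (1.0 - spearman_corr(expl_a, expl_b))
                 + (1.0 - mu) * (1.0 - pearson_corr(expl_a, expl_b))).mean()
    return (1.0 - lam) * task + lam * consensus
```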

3.1 Choosing a Pair of Explainers

The consensus loss term is defined for any two explainers in general, but since we train with standard backpropagation we need these explainers to be differentiable. With this constraint in mind, and with some intuition about the objective of improving agreement metrics, we choose to train for consensus between Grad and IntGrad. If Grad and IntGrad align, then the function should become more locally linear in logit space. IntGrad computes the average gradient along a path in input space toward each point being explained. So, if we train the model to have a local gradient at each point (Grad) closer to the average gradient along a path to the point (IntGrad), then perhaps an easy way for the model to accomplish that training objective would be for the gradient along the whole path to equal the local gradient from Grad. This may push the model to be more similar to a linear model. This is something we investigate with qualitative and quantitative analysis in Section 4.
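For reference, here is a standard sketch of the two explainers (a textbook implementation, not necessarily the authors' exact code). Note `create_graph=True`, which lets the consensus loss backpropagate through the attributions themselves:

```python
import torch

def grad_explain(model, x, target_class):
    # Grad: the gradient of the target logit with respect to the input.
    x = x.detach().clone().requires_grad_(True)
    logit = model(x)[:, target_class].sum()
    return torch.autograd.grad(logit, x, create_graph=True)[0]

def intgrad_explain(model, x, target_class, baseline=None, steps=32):
    # IntGrad: the average gradient along a straight-line path from a baseline
    # (here all zeros, a common default) to x, scaled elementwise by (x - baseline).
    if baseline is None:
        baseline = torch.zeros_like(x)
    avg_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        avg_grad = avg_grad + grad_explain(model, point, target_class) / steps
    return (x - baseline) * avg_grad
```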

3.2 Differentiability

On the note of differentiability, the ranking function 𝑅 is not differentiable. We substitute a soft ranking function from the torchsort package [3]. This provides a floating-point approximation of the ordering of a vector rather than an exact integer computation, which allows differentiation through the ranking operation.
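A small demo of the substitution (the regularization strength here is an arbitrary illustrative value): hard ranks from `argsort` are piecewise constant and give zero gradient, whereas soft ranks admit useful gradients:

```python
import torch
import torchsort

scores = torch.tensor([[0.3, -1.2, 0.7, 0.1]], requires_grad=True)

soft_ranks = torchsort.soft_rank(scores, regularization_strength=0.1)
print(soft_ranks)  # floating-point approximation of the true ranks [3, 1, 4, 2]

# A toy loss pulling the soft ranks toward a target ordering still backpropagates.
target = torch.tensor([[4.0, 1.0, 3.0, 2.0]])
loss = (soft_ranks - target).pow(2).sum()
loss.backward()
print(scores.grad)  # well-defined gradients, unlike an argsort-based ranking
```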

4 THE EFFICACY OF CONSENSUS TRAINING

In this section we present each experiment with the hypothesis it is designed to test. The datasets we use for our experiments are Bank Marketing, California Housing, and Electricity, three binary classification datasets available on the OpenML database [39]. For each dataset, we use the performance of a linear model (logistic regression) as a lower bound on realistic performance, because linear models are considered inherently explainable.

The models we train to study the impact of our consensus loss term are multilayer perceptrons (MLPs). While tabular deep learning is still a growing field and MLPs may be an unlikely choice for many data scientists working with tabular data, deep networks provide the flexibility to adapt training loops for multiple objectives [1, 10, 17, 28, 31, 35]. We also verify that our MLPs outperform linear models on each dataset, because if deep models trained to reach consensus were less accurate than a linear model, we would be better off using the linear model.
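To illustrate the flexibility being referenced, here is a minimal two-objective training loop on synthetic data, assuming the `pear_loss`, `grad_explain`, and `intgrad_explain` sketches from Section 3; the architecture and hyperparameters are placeholders, not the paper's settings:

```python
import torch

# Synthetic stand-in for a tabular binary-classification dataset.
x = torch.randn(512, 8)
y = (x[:, 0] + 0.5 * x[:, 1] > 0).long()

model = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    # Attributions from both explainers, kept differentiable for the consensus term.
    expl_a = grad_explain(model, x, target_class=1)
    expl_b = intgrad_explain(model, x, target_class=1)
    loss = pear_loss(model(x), y, expl_a, expl_b, lam=0.5, mu=0.5)
    opt.zero_grad()
    loss.backward()
    opt.step()
```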

We include XGBoost [6] as a point of comparison for our approach, as it has become a widely used method with high performance and strong consensus metrics on many tabular datasets (figures in Appendix A.7). There are cases where we achieve more explainer consensus than XGBoost, but this point is tangential, as our focus is a loss for training neural networks.

For further details on our datasets and model training hyperparameters, see Appendices A.1 and A.2.


:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA and Work completed while working at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[2] The PEAR package will be publicly available for download via the Package Installer for Python (pip), and it is also available upon request from the authors.

[3] When more than one of the entries have the same magnitude, they share a common ranking value equal to the average of the ranks they would receive if ordered arbitrarily.
