
23/08/2024

The Automation of Tax Controls in Italy: the Role of the Procedure Manager and the Need for Explainability

Preview of the publication of special volume 2024/17

By Giovanni Consolo, Giorgio Hassan

Index:
1. Possible Digital Shift in Tax Decision-Making and the Issue of Liability
1.1. Ve.R.A.: An AI-driven application for Taxpayer Risk Analysis
1.2. Digital Transformation of Decision-Making Processes
2. The Central Role of the Procedure Manager
3. The Significance of Personal Data Protection Legislation and Interventions by the Data Protection Authority Concerning New Automated Procedures to Combat Tax Evasion
3.1. The Right to the “Intelligibility” of the Algorithm
3.2. The Right Not to Be Subject to Decisions Based Solely on Automated Processing
3.3. Verification of Analysis Models, Data Pseudonymization Techniques, and Other Security Measures for Data Processing
4. Towards an Explainable Artificial Intelligence: Possible Solutions
4.1. The place for XAI within the Italian legal framework
4.2. An alternative use for XAI: the place for explainability in internal risk management
5. Conclusions


Abstract

 

***

 

The automation of tax controls is having a significant impact on the framework of tax law and the structure of tax administrations worldwide. In Italy, the recent implementation of an advanced system of tax risk profiling (“Ve.R.A.”) might open the way to the automation of tax proceedings, exposing taxpayers to the potential risk of errors and discrimination. In this paper, we will address the upcoming changes in the framework of Italian law, emphasizing the integration between tax law, data protection law, and AI in adapting the scope of tax proceedings to the progressive automation of tax controls. In addition, we will focus on the invaluable role of human decision-makers, with specific reference to the so-called “procedure manager”, in ensuring a sound integration between humans and AI in the realm of automated tax controls.

 

1. Possible Digital Shift in Tax Decision-Making and the Issue of Liability

In the case Malone v. United Kingdom (1984),[1] the European Court of Human Rights’ Judge Louis-Edmond Pettiti prophetically affirmed that “the danger threatening democratic societies in the years 1980-1990 stems from the temptation facing public authorities to ‘see into’ the life of the citizens […]. In order to answer the needs of planning and of social and tax policy, the State is obliged to amplify the scale of its interferences. In its administrative systems, the State is being led to proliferate and then to computerize its personal data-files” leading to the creation of a “profile” for each citizen[2].

As Judge Pettiti predicted forty years ago, the indiscriminate, widespread collection of personal data by public authorities, in an era of ongoing technological development, might lead to a “digital panopticon” where the public power observes everyone while remaining unnoticed.[3]

Though this problem might not be apparent in the realm of tax controls,[4] there might be several reasons to argue otherwise.[5] In fact, the ongoing automation of public administration has two fundamental consequences for the structure of the tax administration:

  1. Reduction in Non-Digital Interactions:[6] for many taxpayers, the shift towards digital-only interactions has significantly restricted their ability to manage their dealings with tax authorities in any non-digital format. This shift has allowed the Tax Administration to amass extensive data in specialized databases, enhancing the scope and quality of its surveillance and assessment capabilities.
  2. Use of sophisticated Data Analysis Techniques in tax audits:[7] the Tax Administration now extensively utilizes advanced data analysis methodologies to perform large-scale tax risk analysis. These methodologies involve statistical models and automated processes, which are increasingly incorporating artificial intelligence to refine the accuracy and efficiency of taxpayer risk assessments.

In Italy, the use of data analytics in the realm of tax controls is addressed by Law no. 111/2023 (the enabling law for the reform of the tax system), which specifically empowers the government to “enhance the use of digital technologies, [by] employing systems of artificial intelligence, to counteract phenomena of tax evasion and avoidance” (Art. 17).

As preliminary investigations reveal, this evolving framework poses multiple challenges both to the traditional institutions of domestic tax law and to the protection of fundamental rights.[8] These challenges are amplified by concerns such as the opacity of algorithmic processes, the exposure of sensitive personal data, the risks linked to algorithmic profiling, and the possible lack of human oversight, which are intrinsic to the use of automated procedures in administrative proceedings.[9]

A particular focus is placed on an algorithmic system known by the acronym “Ve.R.A.”,[10] which is intended to allow the Italian Tax Administration to cross-reference and process, through advanced techniques based on artificial intelligence, all the information contained in the databases at its disposal. Through this system, the Revenue Agency will be able to select taxpayers for audits and foster spontaneous compliance.

In this paper, the authors will try to address such developments, by analyzing whether the evolution of artificial intelligence, both from a legal and technical perspective, can lay the groundwork for the automation of tax controls within the Italian legal framework.

1.1. Ve.R.A.: An AI-driven application for Taxpayer Risk Analysis

The Ve.R.A. system is regulated by a series of provisions of the 2020 Italian Budget Law[11] and was implemented in June 2022 by a decree of the Ministry of Economy and Finance (MEF),[12] following the approval of the Italian Data Protection Authority (“DPA”). The available information on Ve.R.A. is relatively sparse. In recent discussions with the DPA, the Italian Revenue Agency underscored the need to maintain secrecy on the functioning of Ve.R.A. to preserve the effectiveness of tax controls.[13]

This necessity hints at the immense potential of Ve.R.A., which, on the one hand, will have access to all the databases available to the Tax Administration,[14] and, on the other hand, will be capable of processing the vast amount of information found within these databases. This system aims at targeting “tax-evading” individuals by conducting probabilistic analyses based on evidence from prior audits.

Specifically, according to the information provided by the MEF,[15] the functioning of Ve.R.A. will be based on the interaction of two different models (a schematic sketch follows the list below), namely:

  1. deterministic analysis models, which are intended to reconstruct the assets and financial flows of individual taxpayers; specifically, these models can highlight any discrepancies by cross-referencing information such as the declared income and the level of consumption attributable to individual taxpayers;
  2. probabilistic (or stochastic) analysis models, which are designed to profile individual taxpayers by assigning an average “fiscal risk score” to each taxpayer, based on observational and data modeling processes.
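
To make this distinction concrete, the following minimal Python sketch illustrates the two model families in the abstract. The real Ve.R.A. models, variables, and thresholds are undisclosed, so every name and figure below is a hypothetical placeholder rather than a description of the actual system.

```python
# Illustrative sketch only: Ve.R.A.'s actual models and data are not public.
from dataclasses import dataclass

@dataclass
class TaxpayerRecord:
    declared_income: float        # income reported in the tax return
    estimated_consumption: float  # consumption attributable to the taxpayer
    bank_inflows: float           # flows from the Archive of Financial Relationships

def deterministic_gap(r: TaxpayerRecord) -> float:
    """Deterministic layer: cross-reference declared income against
    observed financial flows and return the unexplained discrepancy."""
    return max(r.estimated_consumption, r.bank_inflows) - r.declared_income

def fiscal_risk_score(r: TaxpayerRecord, model) -> float:
    """Stochastic layer: `model` stands in for a classifier fitted on the
    outcomes of prior audits, mapping features to a risk score in [0, 1]."""
    features = [[r.declared_income, r.estimated_consumption, r.bank_inflows]]
    return float(model.predict_proba(features)[0, 1])
```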

From the limited indications provided by the Italian Tax Administration, it appears that the risk scores provided by Ve.R.A. will not trigger automated tax assessments. Rather, the aim of this system is to distribute public resources more efficiently, by enabling the Tax Administration to focus on “high-risk” profiles while excluding taxpayers with low or otherwise explainable levels of risk. This approach aims at directing tax controls at cases with higher investigative needs, in accordance with Art. 12 of the Taxpayer’s Rights Statute.

However, it has been suggested that this new application might soon supplant human intervention, leading to the (full) automation of the audit process.[16] For this reason, the academic debate has stressed the importance of harmonizing the use of Ve.R.A. with the provisions of tax law (such as the rules that regulate the audit decision-making process) that should limit such extensive use of artificial intelligence in assessment activities.

1.2. Digital Transformation of Decision-Making Processes

In administrative law, it has long been established that the use of “digital variants” in administrative decision-making must adhere to the “traditional” rules and principles of domestic law. In particular, it has been noted that the “depersonalization” of algorithmic decision-making should not diminish the accountability of public officials in individual proceedings. It has also been clarified that public authorities should be liable both for the flaws arising from poor algorithmic risk management and for the errors and defects attributable to the programming phase of the system[17].

This position is reflected by several rulings from the Council of State,[18] which has established a set of principles that should regulate the use of algorithmic systems by the Public Administration.

  1. The principle of algorithmic transparency or intelligibility: this principle aims at ensuring that the algorithmic decision-making process is interpretable by both the controllers and addressees of an individual proceeding.
  2. The principle of non-exclusivity, or human intervention, in the decision-making process (so-called “human in the loop”): this principle ensures that human decision-making influences, or otherwise complements, the outcome of an automated individual decision.
  3. The principle of non-discrimination: this principle is aimed at ensuring that the use of algorithmic tools complies with the principles of equality and equal treatment.

The rationale behind these rulings is primarily based on the European data protection regulation (GDPR, Reg. 2016/679) and, to a lesser extent, on Italian Administrative Law (see, particularly, Law no. 241/1990). It appears that administrative judges have referred to European data protection law not only to build a “safety net” against the automation of administrative proceedings, but perhaps also to highlight that the framework of administrative law should parallel the evolution of AI and new technologies.

These developments suggest the need for appropriate policy measures to ensure that administrative processes remain fair, transparent, and accountable. In fact, the integration of digital tools into the public administration raises significant concerns about the depersonalization of administrative decisions. Therefore, it is necessary to strike a balance between innovation and the protection of fundamental rights, to preserve the legitimacy and acceptance of algorithm-driven public governance.

In conclusion, the implementation of AI in administrative practices requires the adaptation of the existing legal framework to the modern complexities introduced by these technological advancements; in this context, the principles of transparency, non-exclusivity and non-discrimination play a pivotal role, by ensuring the protection of individual fundamental rights in the context of automated decision-making.

 

2. The Central Role of the Procedure Manager

The most delicate aspect of algorithmic decision-making is that artificial intelligence systems are often indecipherable without adequate computational expertise.[19] This problem could lead the Tax Administration to withhold the elements necessary to decipher the functioning of tax risk profiling and the various steps that led to a specific decision. In fact, if human officers are not capable of understanding the software operations themselves, they might avoid challenging the output of the system. This might lead them to provide explanations that do not refer to the reasoning behind the case at hand, but rather to common templates, in violation of the principle of “non-exclusivity”, i.e., human intervention in automated decision-making.

Therefore, the upcoming challenge for tax administrations will be to ensure that basic legal institutions (such as the duty to provide individual hearings or explanations for tax assessments) do not remain mere formalities as a result of the transition from human to automated decision-making.

From this perspective, it seems crucial to deeply revisit the role of the procedure manager to ensure that the officials responsible for AI-based tax proceedings are equipped with adequate and proven competencies in both legal and computational fields. This would foster a “virtuous” dialogue between the Tax Administration and the system, ensuring human intervention over the outcome of algorithmic decisions. Through such interaction, we could ensure that automated tax assessments comply with the principles of “transparency”, “non-exclusivity” and “non-discrimination” of algorithmic decision-making, in accordance with the case-law of the Italian Council of State.

An adequately trained procedure manager will indeed be able to: (i) efficiently conduct verification, evaluation, and validation processes in relation to automated individual decisions, without succumbing to the “overwhelming force of the algorithm”; (ii) verify the correct execution of the logical operations performed by the program and intercept any errors or programming defects; (iii) disclose the operational rules and outcomes of the algorithm during the motivation phase or in adversarial proceedings; and, finally, (iv) in cases where an administrative decision must undergo a second-degree procedure (such as a self-protection procedure)[20], reset the machine and either address the errors made in the previous decision or resort to a completely human revision of the algorithmic decision.

Hence, if we (really) are to see the use of artificial intelligence systems in tax proceedings, a major focus should be put on the role of the procedure manager.

This might even require adjustments at the regulatory level, including an explicit statement that the absence of such a figure may provide grounds for invalidating the acts of the Tax Administration.

Moreover, to ensure that the designation of such officials is not a mere formality, it will no longer be sufficient for the designated officials to be merely technical-tax experts. Instead, they must possess proven technical-computational skills to ensure the correct functioning of the machine and the accurate interpretation of algorithmic decisions.

A more realistic alternative (and, in this regard, the Spanish system could serve as a model for the Italian legislator[21]) might be to configure a sort of “dual-faced” figure. This model would involve both the procedure manager, as the tax official who materially handles the case file, and, in a supporting role, an official from the “Technology and Innovation” department of the Tax Administration, or from another department that oversees, at a central or peripheral level, the development of the algorithmic systems employed by the Italian Revenue Agency. Indeed, the complexity of the context requires transversal and multidisciplinary skills, which seem excessive, if not impossible, to find in a single tax official.

 

3. The Significance of Personal Data Protection Legislation and Interventions by the Data Protection Authority Concerning New Automated Procedures to Combat Tax Evasion

As mentioned in the previous sections, the approval of D.M. 28/06/2022 (i.e., the ministerial decree that established the technical rules for the implementation of Ve.R.A.) was followed by a series of discussions between the Revenue Agency, serving as the data controller, and the Italian DPA.[22]

Technically, the Agency first conducted an “impact assessment” pursuant to Article 35 GDPR, and subsequently submitted this evaluation to the DPA in accordance with Article 36 GDPR.[23] Through provision no. 276/2022, the Italian DPA authorized the use of Ve.R.A., subject to a set of significant conditions and reservations.

Primarily, the scope of the authorization was limited to a “testing phase”, to be conducted “on a representative random sample” of the population. Secondly, the DPA used its corrective powers – as provided by Article 58, paragraph 1, letter d), of the GDPR[24] – to require, at the testing stage, that the Revenue Agency implement a series of significant measures to ensure compliance with the principles of “transparency”, “non-exclusivity” and “non-discrimination”.

These measures emphasize the critical importance of the provisions of data protection law, which could trigger potential civil suits and even claims before a tax court.

For this reason, it is now necessary to focus on the main technical measures that the Italian DPA imposed on the Revenue Agency through provision no. 276/2022. In the next section, we will specifically address the scope and objectives of such measures, while focusing on the legal basis of each measure under the rules and principles of the GDPR.

3.1. The Right to the “Intelligibility” of the Algorithm

Firstly, the DPA required the Revenue Agency to “collect opinions” on the functioning of Ve.R.A. from various stakeholders (such as trade associations and professional orders), and “publish an excerpt of [its] impact assessment”, while excluding the “attachments” and “parts that might compromise the security of the treatments”.[25]

It is worth noting that the adoption of these measures would be unnecessary if the implementation of Ve.R.A. were limited to the stage of tax risk analysis. In fact, tax risk analysis is an activity with no external relevance, which is subject to the complete discretion of the Tax Administration. Therefore, under Italian tax law, the opacity underlying the functioning of Ve.R.A. should not raise concerns, due to the high level of secrecy and discretion[26] that marks this phase of the audit process.

However, under the provisions of the GDPR, the fiscal interest in maintaining secrecy over the functioning of tax risk analysis must be carefully balanced[27] against the right to algorithmic “intelligibility”, as provided by Articles 13, 14, and 15 of the GDPR. More precisely, the right to algorithmic intelligibility provides that the subject of an automated individual decision has the right to receive meaningful information about the logic involved in data processing, in order to understand it.

In this respect, Article 23 of the GDPR stipulates that any exemption to this right must comply with the principle of proportionality. This principle – initially formulated by German scholars and subsequently incorporated into the EU legal framework – broadly stipulates that legislators and public administrations cannot impose obligations and/or restrictions on the rights and freedoms of private individuals protected by Union law to an extent greater than is strictly necessary to achieve the purpose set by the national authority.[28]

Traditionally, the principle of proportionality comprises three requirements, or “sub-principles” (the so-called “three-step theory”), which must all be satisfied for a measure to comply with the principle of proportionality: “suitability”, “necessity”, and “adequacy” (or “proportionality in the strict sense”)[29]-[30]: (i) “suitability” requires assessing whether a measure is likely to achieve the intended result; (ii) “necessity” demands selecting, among the various suitable measures to achieve the intended result, the one that involves the least sacrifice of EU principles; (iii) “adequacy” requires a comparison (or “balancing”) between public and private interests, and provides that the measure, though “suitable” and “necessary”, must not unduly burden its addressees.[31]

Assuming that Ve.R.A. constitutes a “suitable” and “necessary” measure to preserve the effectiveness of tax controls, the obligations imposed by the DPA on the Italian Revenue Agency seem precisely aimed at complying with the “sub-principle” of “adequacy”, in order to strike a balance between secrecy and proportionality in the context of algorithmic profiling.

3.2. The Right Not to Be Subject to Decisions Based Solely on Automated Processing

The second technical measure imposed by the DPA on the Revenue Agency concerned the “adequate training of [the] personnel involved” in the use of Ve.R.A. This measure aims at safeguarding the “right not to be subject to decisions based solely on automated processing” as found in Article 22 of the GDPR (subject to the exception provided under the subsequent Article 23, namely the principle of proportionality).

More specifically, Art. 22 GDPR stipulates that, in the context of an automated decision-making process, there must always be human intervention in reviewing, validating, or refuting the decision made by the machine (the so-called “human in the loop”).

Although the ministerial decree and the impact assessment both stated that “human intervention is always guaranteed in the processes of forming analysis datasets and control”, the DPA warned that tax officers might “find it more prudent not to oppose the system's results, thus nullifying the guarantee connected to human intervention in the decision-making process”.[32] This problem would be further exacerbated if such officials did not possess proven competencies in managing, analyzing and monitoring the functioning of the algorithmic systems developed by the Revenue Agency.

Hence, this obligation aligns with our previous considerations (supra, section 2), as to the necessity of revisiting the role of the procedure manager and ensuring that the officials responsible for potential future tax proceedings conducted via AI are equipped with adequate and proven competencies in both legal and computational fields.

3.3. Verification of Analysis Models, Data Pseudonymization Techniques, and Other Security Measures for Data Processing

The Italian DPA also required the Italian Revenue Agency to adopt effective processes to verify the quality of their models, apply data pseudonymization techniques, and implement security measures for processing data related to electronic billing, by providing periodic reports about the metrics used, the activities conducted, any issues encountered, and the security measures and techniques applied in practice.

These precautions are intended to uphold the general principles of “data minimization”, “storage limitation” and “integrity and confidentiality” of data, as provided by Articles 5 and 6 of the GDPR. 

Since Ve.R.A. will process and analyze data pertaining to a large set of taxpayers, it is critical to ensure that the security measures and techniques implemented by the Agency can effectively mask the identities of individuals and prevent data breaches in case of unauthorized access to the computer systems of the Revenue Agency.

The guidance provided by the DPA on data pseudonymization and other security measures for data processing is highly technical and specific. Nevertheless, official sources have reported that among these measures, the Revenue Agency: 

(i) will systematically apply pseudonymization techniques[33] aimed at preventing the “mass reidentification” of taxpayers, and specifically, the risk “that through the cross-referencing of multiple non-identified variables included in the analysis dataset and the original database”, the model will be able to “reconstruct with a single operation all the original tax codes (or a significant number of them)” (a minimal sketch of this technique follows the list);[34]

(ii) in the definition of the models of analysis, it will not use the so-called “special” data (Art. 9 of the GDPR), such as those related to health status, sexual orientation, ethnic origin, political and religious orientations, etc.; moreover, the Revenue Agency will also exclude the data concerning the taxpayer’s country of birth; 

(iii) in the adoption of stochastic-probabilistic analysis models (for example, the classification of a subject as a high- or low-risk profile based on the outcome of previous investigative or assessment activities conducted against other subjects), it will only utilize “historical information related to processes that have completed their administrative life cycle”, thus excluding information retrievable from ongoing tax proceedings;

(iv) to ensure transparency in relation to the risk analysis activity and the profiling techniques, it will publish a periodically updated special informative note on the logic used by the algorithms, as well as on the databases used to perform tax risk analysis, on the institutional website of the Revenue Agency; 

(v) it will not contemplate fully automated acts of assessment, as each phase of the risk analysis process will include the intervention of one or more operators.
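
As an illustration of the keyed pseudonymization mentioned under point (i), the sketch below replaces tax codes with keyed digests. The actual techniques, keys, and organizational safeguards adopted by the Revenue Agency are not public, so this is only a minimal example of the general approach, with every value invented for the occasion.

```python
# Minimal sketch of keyed pseudonymization; not the Revenue Agency's actual scheme.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-key-held-by-a-separate-office"  # assumption

def pseudonymize(tax_code: str) -> str:
    """Replace an identifier with a keyed digest: the same code always maps to
    the same pseudonym (so datasets remain linkable for analysis), while
    re-identification requires recomputing the digest with the secret key."""
    return hmac.new(SECRET_KEY, tax_code.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("RSSMRA85T10A562S"))  # a fictitious Italian tax code
```

Separating whoever holds the key from whoever runs the analysis mirrors the organizational measure welcomed by the DPA, namely the distinction between the personnel defining the scope of data processing and those performing pseudonymization (see note 34).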

It is worth noting that the technical measures underscored in these five points address some of the problems we have identified in the previous sections.

By imposing specific transparency requirements and human intervention in the conduct of tax assessments, the Revenue Agency lays the foundation for implementing AI technologies in accordance with the standards set by the Italian tax legal framework and the principle of proportionality.

Yet, the legitimacy of algorithmic techniques of risk analysis rests on a specific condition that prevails over all the others, namely the presence of human intervention in individual tax proceedings (supra, point v).

In fact, as widely discussed in section 2, human intervention is needed to address the complexity of audit decision-making, and to prevent fallacies or information asymmetry in relation to the outcome of algorithmic profiling. Put differently, the current legal framework can only allow further automation in the decision-making process if automated decision-making does not outweigh human intervention in individual audit decisions. 

The measures undertaken by the Revenue Agency fail to provide a reliable basis for preventing or limiting automated decision-making, despite the warnings of the Italian DPA and legal scholarship in this respect. In the next section, we will try to address these shortcomings by analyzing whether the current developments in the field of artificial intelligence can help harmonize the potential automation of tax proceedings with the requirements set by the principles of administrative and data protection law.

 

4. Towards an Explainable Artificial Intelligence: Possible Solutions

The increasing adoption of algorithmic systems by Tax Authorities requires an adequate understanding of their functioning by tax officers and taxpayers alike. 

In fact, the inability to interpret the functioning of AI models could trigger severe organizational problems, and potentially undermine the rights of individual taxpayers in individual tax proceedings. 

This problem further enhances the need to constrain the use of AI systems under the standards set by the principles of European and domestic law. More precisely, the implementation of AI techniques in tax controls should happen under the oversight and discretion of human officers, who, in turn, should maintain control over the audit decision-making process.

These concerns appear to be well reflected in the requirements set by Italian tax law. As we have mentioned in the previous sections, Italian law does not conceive algorithmic profiling as a major component of the decision-making process, but rather as a means to foster spontaneous compliance. More precisely, Art. 5 of D.M. 28/06/2022 provides that the outcome of tax risk analysis is always subject to the oversight and discretion of human officers, who autonomously decide whether (or not) to issue an act of assessment to the “high-risk” profiles detected by the system.[35]

However, the Italian DPA suggested that human officers might soon rely on the system’s predictions without further intervening in the decision-making process (supra),[36] leading to a violation of the principles of European and domestic law.

This scenario could lead to even further problems in light of the current developments in the interpretation of the notion of automated decision-making by the CJEU. More specifically, in the recent SCHUFA ruling,[37] the CJEU stated that the outcome of a [credit] scoring system should be labeled “automated decision-making” under Art. 22 GDPR whenever it plays a determinant role in the outcome of a decision, even if it is not the sole component of the decision-making process.[38]

This interpretation emphasizes the need to evaluate the weight of human decision-making, compared to automated decision-making, in the realm of tax assessments: in fact, an extensive reliance on the outcome of algorithmic profiling could trigger the application of Art. 22 GDPR, and bind Tax Authorities to comply with the obligations set for automated individual decisions.

It is worth noting that the developments arising from the SCHUFA case also concern this latter point, namely the obligations that apply to the providers of automated individual decisions.

More specifically, in the Advocate General’s opinion in the SCHUFA case, Adv. Gen. Pikamäe asserted that the “right to receive an explanation” for the outcome of automated decision-making should mention, at least on an “aggregate basis”, the variables detected by the system in the scoring process.[39] Despite not being directly addressed by the SCHUFA ruling, this conclusion could trigger a broader interpretation of the right to receive an explanation, laying the groundwork for the implementation of automated decision-making in accordance with the principles of European law.[40]

In the realm of tax law, the solution envisioned by the Advocate General would require tax officers to provide taxpayers with reliable explanations of the functioning of the system. Yet, tax officers might be unable to “carve” such explanations out of the system, as the use of AI is notoriously subject to an “efficiency vs. explainability” trade-off.

A possible solution to this problem could be found within the realm of XAI (“explainable artificial intelligence”).[41] This field of study encompasses various techniques that provide reliable estimates of the variables that determined the outcome of an algorithmic decision-making process.

Through such techniques, the users of AI models (including, inter alia, Tax Authorities) could comply with the requirements set by the CJEU, and specifically, with the duty to provide explanations for the outcome of an automated individual decision. By giving reliable estimates on the variables related to individual tax scores, Tax Authorities could provide taxpayers with insights on the legal grounds of their assessment notice, while ensuring that the outcome of automated individual decisions is reliable and fair. 
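
By way of illustration, the sketch below implements one of the simplest such techniques, a LIME-style local surrogate: it perturbs the inputs around a single taxpayer's record and fits a distance-weighted linear model whose coefficients estimate each variable's contribution to that risk score. All features, data, and the model are synthetic and hypothetical; this is a sketch of the technique, not of any system actually in use.

```python
# LIME-style local explanation on a synthetic risk model (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

FEATURES = ["declared_income", "consumption", "bank_inflows", "prior_audit_flags"]
X = rng.normal(size=(2000, 4))
# Toy ground truth: risk grows with the gap between consumption and income.
y = (X[:, 1] - X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)  # stand-in opaque scorer

def explain_locally(model, x, n_samples=500, kernel_width=1.0):
    """Perturb around instance x, query the black box, and fit a
    distance-weighted linear surrogate that approximates it locally."""
    Z = x + rng.normal(scale=0.3, size=(n_samples, x.size))
    scores = model.predict_proba(Z)[:, 1]
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width**2)
    surrogate = Ridge(alpha=1.0).fit(Z, scores, sample_weight=weights)
    return dict(zip(FEATURES, surrogate.coef_))  # per-variable contributions

x = X[0]
print(f"risk score: {black_box.predict_proba(x.reshape(1, -1))[0, 1]:.2f}")
for name, weight in explain_locally(black_box, x).items():
    print(f"{name:>18}: {weight:+.3f}")
```

On this toy model, the surrogate should attribute positive weight to consumption and negative weight to declared income, which is precisely the kind of variable-level estimate referred to above.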

Accordingly, in the near future, Tax Authorities could use these techniques to comply with the legal requirements set forth in the previous sections.

However, at the current state of technological development, the integration of XAI techniques into our legal system faces several challenges, as we will discuss more broadly in the following section.

4.1. The place for XAI within the Italian legal framework

The implementation of XAI techniques in the realm of tax law faces significant challenges from both a legal and technical perspective.

From a technical perspective, XAI estimates have proven reliable only under specific circumstances, while being misleading in several other cases.[42]

From a legal perspective, lawmakers should clarify several aspects of the functioning of XAI explanations, namely: (i) whether the content of XAI explanations should determine the issuance of an act of assessment, and (ii) whether, and when, taxpayers should be entitled to access the content of such explanations.

As for the first question, we believe that decision-makers should only deem an algorithmic decision “valid” if it is proven to relate to a set of relevant variables, while possessing a minimal correlation with discriminatory or irrelevant variables (such as age, sex, or ethnicity). Based on this distinction, tax officers could rely on the content of XAI explanations to determine whether to notify taxpayers with an act of assessment, depending on the variables that determined the outcome of such individual decision.
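
A screening rule of this kind could be expressed, very schematically, as follows. The protected-variable list and the threshold are our own illustrative assumptions, not requirements found in any legal source.

```python
# Hypothetical admissibility screen for an algorithmic decision; the
# protected set and the 5% threshold are illustrative assumptions.
PROTECTED = {"age", "sex", "ethnicity"}

def decision_is_admissible(attributions: dict[str, float],
                           max_protected_share: float = 0.05) -> bool:
    """Deem a decision 'valid' only if protected variables carry at most a
    minimal share of the total explanatory weight of the risk score."""
    total = sum(abs(v) for v in attributions.values()) or 1.0
    protected = sum(abs(v) for k, v in attributions.items() if k in PROTECTED)
    return protected / total <= max_protected_share
```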

As for the second question, we believe that lawmakers should specify whether taxpayers would be entitled to access the content of XAI explanations.

Naturally, this choice depends on the level of automation of the decision-making process: the higher the emphasis on automated decision-making, the higher the need to provide insights on the reasoning applied by the algorithms. Accordingly, taxpayers should be granted access to the content of XAI explanations whenever algorithmic decision-making was proven to be determinant for the outcome of an individual assessment.

At this point, lawmakers should also identify the stage of the decision-making process where taxpayers would be able to access the content of XAI explanations. 

Hypothetically, XAI explanations could be used by Tax Authorities in individual proceedings, to provide taxpayers with information on the reasoning behind their risk score. In this case, the outcome of XAI estimates should be disclosed to individual taxpayers upon notification of their act of assessment, as a substitute for, or integration of, “traditional explanations”.

Alternatively, XAI techniques could be used by Tax Authorities in court, to prove the reliability of their models against claims of unfairness or discrimination.[43]

However, we should note that in the latter case, Tax Authorities would only disclose the reasoning of the model after the issuance of an act of assessment, in blatant violation of the provisions of Italian tax law.

For these reasons, the outcome of XAI explanations could only be used upon notification of an act of assessment, as suggested in our former hypothesis. In this scenario, tax officers would use XAI explanations as a means to provide taxpayers with valuable information on the decision reached by the model, thus “integrating” the content of traditional explanations. This use of XAI would allow further automation in the conduct of tax audits, while preventing any possible violation of the right to receive an explanation in individual assessments.

However, although this solution could be extremely valuable in the near future, it currently faces several constraints.

On one hand, the technical limitations that affect the field of XAI create uncertainty as to whether these techniques can provide “reliable” information on the functioning of AI models. On the other hand, the disclosure of detailed information on the functioning of algorithmic profiling could undermine the effectiveness of tax controls, by providing taxpayers with insights that they could use to predict the reasoning of the algorithms (supra).[44]

For these reasons, the use of XAI techniques in individual proceedings is currently constrained by both technical and legal limitations. Yet, tax officers might still be able to exploit the advantages of XAI as a means to prevent, rather than advance, the progressive automation of tax audits. More specifically, XAI explanations could serve as a valuable means to enhance the role of human decision-making in individual assessments, by providing insights on the functioning of the algorithms at an earlier stage of the decision-making process, i.e. the stage of algorithmic profiling. In fact, XAI explanations could allow tax officers to interpret the output of the algorithms and provide a meaningful contribution to the outcome of the decision-making process, in accordance with the principles of European and domestic law.

4.2. An alternative use for XAI: the place for explainability in internal risk management

Although XAI techniques could help foster explainability in automated decision-making, there are technical and practical constraints that make such techniques unable to supplant traditional explanations in individual tax proceedings.

At the current state of scientific development, the uncertainty and inconsistency that surrounds the implementation of XAI makes it incompatible with the requirements set by the provisions of Italian tax law. Furthermore, the disclosure of detailed information on the functioning of the algorithms could undermine the effectiveness of tax controls by revealing too much information to individual taxpayers (supra).

For these reasons, XAI explanations are currently unable to match the progressive automation of tax audits, enhancing the need for human intervention in the decision-making process. Put differently, the field of XAI does not currently provide a reliable basis for the full automation of tax audits, but rather emphasizes the need to promote human oversight and discretion, to meet the standards set by the principles of European and domestic law.

To this end, we have repeatedly emphasized that the Revenue Agency should “make up” for the unfeasibility of individual explanations by (i) providing human intervention in the decision-making process and (ii) implementing specific measures to enhance transparency in relation to the functioning of algorithmic profiling.

In our view, XAI explanations could help ensuring that the use of AI in tax controls complies with these specific requirements, by enhancing (i) human intervention and (ii) transparency in algorithmic profiling.

More specifically, XAI explanations could play a fundamental role at the stage of internal risk management, by providing reliable indications, especially at an aggregate level, on the risk of bias, inaccuracy, or fallacies in the functioning of algorithmic profiling.[45]
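
As a sketch of what such aggregate-level indications might look like, the function below averages per-decision attributions (such as those produced by the local surrogate sketched in section 4) over a batch of profiling runs and flags any drift of explanatory weight toward suspect variables. Names and thresholds are, once again, our own assumptions rather than established practice.

```python
# Hedged sketch of aggregate XAI monitoring for internal risk management.
import numpy as np

def aggregate_report(explanations: list[dict[str, float]],
                     suspect: set[str],
                     alert_share: float = 0.05) -> dict[str, float]:
    """Average the absolute per-variable contributions over a batch of
    decisions and warn if suspect variables carry too much overall weight."""
    keys = list(explanations[0])
    mean_abs = {k: float(np.mean([abs(e[k]) for e in explanations])) for k in keys}
    total = sum(mean_abs.values()) or 1.0
    share = sum(v for k, v in mean_abs.items() if k in suspect) / total
    if share > alert_share:
        print(f"ALERT: suspect variables drive {share:.1%} of the model's output")
    return mean_abs
```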

Naturally, the possible benefits that might derive from the implementation of XAI techniques for risk management purposes cannot supplant the rights and guarantees provided by domestic tax law in the realm of tax controls.[46]

Rather, these techniques may offer a reliable basis to foster AI interpretability, which in turn, is a pre-condition for providing taxpayers with an effective legal remedy in relation to their act of assessment. 

This conclusion is consistent with our previous considerations: the implementation of artificial intelligence in tax audits places a fundamental responsibility on tax officers and, specifically, on the so-called procedure manager, who is required to intervene in the decision-making process and prevent the full automation of tax audits, thus preserving the fundamental roots of taxpayer protection.[47]

 

5. Conclusions

As we have illustrated in this paper, the increasing reliance on automated decision-making by Tax Authorities requires the adoption of specific management techniques – such as XAI explanations – to understand and interfere with the outcome of algorithmic profiling.

Naturally, these techniques cannot supplant the fundamental institutions of taxpayer protection; rather, they can support tax officers in complying with the requirements set by the principle of proportionality, and in providing taxpayers with an effective legal remedy against (automated) acts of assessment. 

Although these conclusions may provide insights as to how Tax Authorities should integrate AI systems in the field of taxation, it would be foolish to interpret the progressive automation of law enforcement as a mere legal or technical issue. In fact, the increasing adoption of AI technologies in the public sector raises concerns that extend beyond the perspective that we have adopted in this paper.[48]

More specifically, we should consider whether, in light of the extraordinary and rapid technological developments we have witnessed in the last two decades, it is appropriate to promote, at a legislative level, a “full-field” use of advanced artificial intelligence techniques of “data scraping” to gather citizens’ information on the web. 

This is especially true considering that several multinational companies have undertaken multiple initiatives to self-regulate the development and commercialization of AI.[49]

Ultimately, the adoption of a “Machiavellian” approach in the implementation of new technologies could lead certain private entities to assume a “quasi-publicistic” role in regulatory dynamics, and cause a “privatization” in the protection of our fundamental rights.[50]

 



[1] ECtHR, Malone v The United Kingdom, n. 8691/79, 1984. See L. Seminara, ‘Sorveglianza segreta e nuove tecnologie nel diritto europeo dei diritti umani’, in MediaLaws, 2019, p. 132 ss.

[2] Concurring Opinion of Judge Pettiti, Malone v The United Kingdom, n. 8691/79, 1984.

[3] See G. Pitruzzella, ‘Dati fiscali e diritti fondamentali’, in Dir. prat. trib., n. 2, 2022, p. 666 ss., and F. Farri, ‘Digitalizzazione dell’amministrazione finanziaria e diritti dei contribuenti’, in Riv. dir. trib., 2020, p. 120 ss. See also O. Pollicino, G. De Gregorio, ‘Constitutional Law in the Algorithmic Society’, in H. Micklitz et al. (edited by), Constitutional Challenges in the Algorithmic Society, Cambridge University Press, 2021, p. 3 ss.

[4] See also note 31, infra.

[5] The relationship between digitalization and efficient administration was originally addressed in A. Predieri, Gli elaborati elettronici nell’amministrazione dello Stato, Il Mulino, Bologna, 1971, and G. Duni, ‘L’utilizzabilità delle tecniche elettroniche nell’emanazione degli atti e nei procedimenti amministrativi. Spunti per una teoria dell’atto amministrativo elaborato nella forma elettronica’, in Riv. Amm., 1978, p. 407 ss. Still, the current developments in the field of artificial intelligence require a higher degree of awareness, in order to adapt the existing legal framework to the transition from human to automated decision-making. See G. Avanzini, Decisioni amministrative e algoritmi informatici, Editoriale Scientifica, Naples, 2019, p. 81. In other words, it does not suffice for States to invoke the common good, as other elements are needed to legitimize the implementation of large-scale public surveillance (see D. Lyon, The Culture of Surveillance: Watching as a Way of Life, Cambridge, 2018).

[6] F. Paparella, ‘L’ausilio delle tecnologie digitali nella fase di attuazione dei tributi’, in Riv. dir. trib., 2022, p. 617 ss.; M. Logozzo, ‘La centralità della telematica negli adempimenti tributari: la fattura elettronica’, in L. Del Federico, F. Delfini (edited by), La Digital Economy nel sistema tributario italiano ed europeo, Amon, Padua, 2015, p. 119 ss.; R.C. Guerra, ‘L’intelligenza artificiale nel prisma del diritto tributario’, in Dir. prat. trib., 2020, p. 921. Crucially, the reduction in non-digital interactions emphasizes the need to allow taxpayers not to engage in digital interactions against their will. On this point, see A. Contrino, ‘Digitalizzazione dell’amministrazione finanziaria e attuazione del rapporto tributario: questioni aperte e ipotesi di lavoro nella prospettiva dei principi generali’, in Riv. dir. trib., 2023, p. 105 ss.

[7] See A. Contrino, ‘Banche dati tributarie, scambio di informazioni fra autorità fiscali e “protezione dei dati personali”: quali diritti e tutele per i contribuenti?’, in Riv. telem. dir. trib., 2019, p. 7 ss. 

[8] See A. Santoro, ‘Nuove frontiere per l’efficienza dell’amministrazione fiscale: tra analisi del rischio e problemi di privacy’, in G. Arachi, M. Baldini (edited by), La finanza pubblica italiana. Rapporto 2019, Il Mulino, Bologna, 2019, p. 66 ss.; A. Carinci, ‘Fisco e privacy: storia infinita di un apparente ossimoro’, in Il fisco, 2019, p. 4407 ss.; A. Contrino, S.M. Ronco, ‘Prime riflessioni e spunti in tema di protezione dei dati personali in materia tributaria, alla luce della giurisprudenza della Corte di Giustizia e della Corte EDU’, in Dir. prat. trib. int., 2019, p. 599 ss.; M. D’Ambrosio, ‘La protezione dei dati personali alla luce della giurisprudenza CEDU e CGUE: aspetti generali e profili di criticità’, in Dir. prat. trib. int., 2019, p. 970 ss.; G. Salanitro, ‘Amministrazione finanziaria e Intelligenza Artificiale, tra titolarità della funzione e responsabilità’, in S. Aleo (edited by), Evoluzione scientifica e profili di responsabilità, Pacini Editore, Pisa, 2021, p. 414 ss.; G. Palumbo, ‘Contrasto all’evasione fiscale e impatto sulla privacy dei contribuenti’, in G. Palumbo (edited by), Fisco e privacy. Il difficile equilibrio tra lotta all’evasione e tutela dei dati personali, Pacini Editore, Pisa, 2021, p. 13 ss.; S. Dorigo, ‘L’intelligenza artificiale e i suoi usi pratici nel diritto tributario: Amministrazione finanziaria e giudici’, in R.C. Guerra, S. Dorigo (edited by), Fiscalità dell’economia digitale, Pacini Editore, Pisa, 2022, p. 204; A. Tomo, ‘La “forza centripeta” del diritto alla protezione dei dati personali: la Corte di Giustizia sulla rilevanza in ambito tributario dei principi di proporzionalità, accountability e minimizzazione’, in Dir. prat. trib. int., 2022, p. 908 ss.; G. Ziccardi, ‘Protezione dei dati, lotta all’evasione e tutela dei contribuenti: l’approccio del Garante per la protezione dei dati italiano tra trasparenza, big data e misure di sicurezza’, in G. Ragucci (edited by), Fisco digitale. Cripto-attività, protezione dei dati, controlli algoritmici, Giappichelli, Turin, 2023, p. 61 ss.; G. Consolo, ‘Sul trattamento dei dati personali nell’ambito delle nuove procedure automatizzate per il contrasto dell’evasione fiscale’, in E. Marello, A. Contrino (edited by), La digitalizzazione dell’amministrazione finanziaria tra contrasto all’evasione e tutela dei diritti del contribuente, Giuffrè, Milan, p. 179 ss.

[9] See F. Pizzetti, ‘La protezione dei dati personali e la sfida dell’intelligenza artificiale’, in F. Pizzetti (edited by), Intelligenza artificiale, protezione dei dati personali e regolazione, Giappichelli, Turin, 2018, p. 5 ss., and A. Pierucci, ‘Elaborazione dei dati e profilazione delle persone’, in V. Cuffaro et al. (edited by), I dati personali nel diritto europeo, Giappichelli, Turin, 2019, pp. 414 ss. and 449 ss.

[10] See C. Francioso, ‘Intelligenza artificiale nell’istruttoria tributaria e nuove esigenze di tutela’, in Rass. trib., n. 1, 2023, p. 47 ss., and G. Consolo, ‘Il nuovo applicativo di analisi del rischio fiscale “Ve.Ra.” nel processo di digitalizzazione dell’attività dell’Amministrazione finanziaria’, in Il diritto dell’economia, n. 112, 2023, p. 153 ss.

[11] Art. 1, paragraph 682 et seq., Law of December 27, 2019, no. 160.

[12] See Decreto MEF 28 June 2022, “Attuazione dell’articolo 1, comma 683, della legge 27 dicembre 2019, n. 160, relativo al trattamento dei dati contenuti nell’archivio dei rapporti finanziari”.

[13] To this end, it was pointed out that providing the public with detailed information on the functioning of Ve.Ra. could expose the Tax Authority to “reverse engineering techniques” which could ultimately allow taxpayers to escape surveillance. See C. Francioso, op. cit., p. 65.

[14] Including the Archive of Financial Relationships, the data from E-billing, the data contained in the Public Vehicle Register and in the Airport Districts, the data gathered by Regions and Municipalities, the data collected by the National Institute of Social Security (INPS), the data collected by insurance companies, and the data held in private databases that are accessible by the Tax Administration.

[15] Decreto MEF 28 June 2022, supra note 12.

[16] G. Ragucci, ‘L’analisi del rischio di evasione in base ai dati dell’archivio dei rapporti con gli intermediari finanziari: prove generali dell’accertamento algoritmico?’, in Riv. tel. dir. trib., 2019; M. Fasola, ‘Le analisi del rischio di evasione tra selezione dei contribuenti da sottoporre a controllo e accertamento “algoritmico”’, in G. Ragucci (edited by), Fisco digitale. Cripto-attività, protezione dei dati, controlli algoritmici, Giappichelli, Turin, 2023, p. 79 ss.; C. Francioso, op. cit., p. 82 ss.

[17] See R. Ursi, ‘La variante digitale della costruzione della decisione amministrativa’, in A. Bartolini et al. (edited by), La legge n. 241 del 1990 trent’anni dopo, Giappichelli, Turin, 2021, p. 405 ss. 

[18] See, inter alia, Cons. Stato, sez. VI, 13 dicembre 2019, n. 8472. L. Musselli, ‘La decisione amministrativa nell’età degli algoritmi: primi spunti’, in MediaLaws, n. 1, 2020, p. 20; A. Simoncini, ‘Profili costituzionali della amministrazione algoritmica’, in Rivista trimestrale di Diritto Pubblico, n. 4, 2019, p. 1149 ss.; F. Laviola, ‘Algoritmico, troppo algoritmico: decisioni amministrative automatizzate, protezione dei dati personali e tutela delle libertà dei cittadini alla luce della più recente giurisprudenza amministrativa’, in BioLaw Journal, n. 3, 2020, p. 389 ss.

[19] See sec. 3.

[20] See L. Torchia, Lo Stato digitale. Una introduzione, Il Mulino, Bologna, 2023, p. 158.  

[21] See Art. 96(3), second period, of Ley General Tributaria 58/2003: “Además, cuando la Administración tributaria actúe de forma automatizada se garantizará la identificación de los órganos competentes para la programación y supervisión del sistema de información y de los órganos competentes para resolver los recursos que puedan interponerse”.

[22] Italian DPA, Provvedimento 22 December 2021, n. 453, and Provvedimento 30 July 2022, n. 276.

[23] R. Torino, ‘La valutazione d’impatto (Data Protection Impact Assessment)’, in V. Cuffaro et al. (edited by), op. cit., p. 855.

[24] Id., p. 874 ss.

[25] In May 2023, upon request by the Italian DPA, the Revenue Agency published a part of its Data Protection Impact Assessment (“Documento di Valutazione di Impatto Sulla Protezione dei Dati”) and a document describing the techniques adopted by Ve.Ra. in tax controls (“Informativa Sulla Logica Sottostante i Modelli di Analisi del Rischio Basati sui Dati dell’Archivio dei Rapporti Finanziari”).

[26] See L. Salvini, La partecipazione del privato all’accertamento (nelle imposte sui redditi e nell’IVA), Cedam, Padua, 1990, p. 84 and F. Gallo, ‘Discrezionalità nell’accertamento e sindacabilità delle scelte d’ufficio’, in Riv. dir. fin., 1992, p. 656. In relation to whether tax risk analysis constitutes an activity with external relevance, or otherwise a matter of internal case management, see G. Vanz, I poteri conoscitivi e di controllo dell’amministrazione finanziaria, Cedam, Padua, p. 232 ss. and G. Fransoni, Le indagini tributarie. Attività e poteri conoscitivi nel diritto tributario, Giappichelli, Turin, 2020, p. 37 ss.

[27] For further remarks in this regard, see G. Consolo, ‘Sul trattamento dei dati personali nell’ambito delle nuove procedure automatizzate per il contrasto dell’evasione fiscale’, in E. Marello, A. Contrino (edited by), op. cit., p. 183 ss.

[28] See, ex multis, A. Sandulli, La proporzionalità dell’azione amministrativa, Cedam, Padua, 1998, and D.U. Galetta, Principio di proporzionalità e sindacato giurisdizionale nel diritto amministrativo, Giuffrè, Milan, 1998. The application of the principle of proportionality in tax controls is extensively addressed in G. Moschetti, Il principio di proporzionalità come “giusta misura” del potere nel diritto tributario, Cedam, Padua, 2017.

[29] See A. Sandulli, ‘Proporzionalità’, in S. Cassese (edited by), Dizionario di diritto pubblico, Giuffrè, Milan, 2006, p. 4645 ss.: “The legal notion of proportionality can be defined, to some extent, as a one and triune concept, because while it is possible to decompose it in three foundational elements (namely necessity, suitability, adequacy), these three elements cannot be decomposed, but rather only distinguished, from each other […] These three elements reflect the logical steps that form the object of a judicial proportionality assessment” (Authors’ translation). See also A. Bodrito, ‘Note in tema di proporzionalità e Statuto del contribuente’, in A. Bodrito, A. Contrino, A. Marcheselli (edited by), Consenso, equità e imparzialità nello Statuto del contribuente, Giappichelli, Turin, 2012, p. 286.

[30] The three-step proportionality assessment developed by German scholars has rarely been adopted by the CJEU. Apart from some exceptions (see ECJ C-265/87), the proportionality assessment adopted by the CJEU is generally a two-step judgement, focusing on the notions of “necessity” and “suitability”, or otherwise a proportionality assessment where the logical steps described above follow a different order. See more extensively D.U. Galetta, ‘Il principio di proporzionalità’, in M.A. Sandulli (edited by), Codice dell’azione amministrativa, Giuffrè, Milan, 2017, p. 169 ss.

[31] On the contrary, the Italian Supreme Courts have frequently adopted the three-step proportionality assessment described above. See Cons. Stato, sez. VI, 8 February 2008, n. 424, and Cons. Stato, sez. VI, 1 April 2000, n. 1885, commented by D.U. Galetta, ‘Una sentenza storica sul principio di proporzionalità con talune ombre in ordine al rinvio pregiudiziale alla Corte di giustizia’, in Riv. It. Dir. Pubbl. Comp., 2000, p. 459.

[32] See Italian DPA, Provv. 30 July 2022, n. 276.

[33] Pseudonymization techniques [see GDPR, art. 4, para. 1(5)] offer a different type of protection compared to anonymization techniques. Essentially, pseudonymization consists of replacing data with other data. By doing so, data processors can only recognize the addressees of data processing by using a specific “key”. Accordingly, pseudonymization techniques do not alter the scope of data processing, as they are ultimately meant to recognize the taxpayers that are addressed by algorithmic tax risk analysis. Therefore, pseudonymization techniques are implemented to guarantee that the data are secure, confidential, and free from manipulation. On the contrary, anonymization techniques ensure that the (anonymized) data are not referrable, in any case, to the addressees of data processing. See L. Tempestini, G. D’Acquisto, ‘Il dato personale oggi tra le sfide dell’anonimizzazione e le tutele rafforzate dai dati sensibili’, in G. Busia et al. (edited by), Le nuove frontiere della privacy nelle tecnologie digitali. Bilanci e prospettive, Aracne, Rome, 2016, p. 85 ss., and S. Calzolaio, ‘Protezione dei dati personali (dir. pubbl.)’, in Digesto Disc. Pubbl., 2017.

[34] In the Provvedimento n. 276/2022, the Italian DPA has welcomed the adoption of specific organizational and technical measures to limit the risks connected with tax risk analysis. Particularly, we refer to the distinction between the personnel in charge of defining the nature and scope of data processing, and those in charge of data pseudonymization. Furthermore, the Revenue Agency has made sure that the possibility to de-pseudonymize data is limited in time and scope. Still, the Italian DPA has highlighted that such measures might not successfully target the risks connected with data processing and, specifically, the risk that taxpayers may be “re-identified” through the adoption of proxy-variables that relate to specific features of individual taxpayers.

[35] Art. 5 of D.M., 28 June 2022.

[36] Italian DPA, Provv. 30 July 2022, n. 276.

[37] ECJ, SCHUFA case [C-634/21].

[38] To this end, the ECJ observes that “as regards the condition that the decision must produce ‘legal effects’ concerning the person at issue or affect him or her ‘similarly significantly’, it is apparent from the very wording of the first question referred that the action of the third party to whom the probability value is transmitted draws ‘strongly’ on that value. Thus, according to the factual findings of the referring court, in the event where a loan application is sent by a consumer to a bank, an insufficient probability value leads, in almost all cases, to the refusal of that bank to grant the loan applied for”.

Accordingly, the Court concludes that “in those circumstances, it must be stated that the third condition to which the application of Article 22(1) of the GDPR is subject is also fulfilled, since a probability value such as that at issue in the main proceedings affects, at the very least, the data subject significantly”.

[39] Opinion of Adv. Gen. Pikamäe, SCHUFA case [C-634/21], § 58: “the obligation to provide ‘meaningful information about the logic involved’ must be understood to include sufficiently detailed explanations of the method used to calculate the score and the reasons for a certain result. In general, the controller should provide the data subject with general information, notably on factors taken into account for the decision-making process and on their respective weight on an aggregate level, which is also useful for him or her to challenge any ‘decision’ within the meaning of Art. 22(1) GDPR”.

[40] The relationship between automated decision-making on one hand, and explainability on the other, was first drawn by the referring court (Verwaltungsgericht Wiesbaden), which explicitly asked the ECJ whether arts. 6 and 22 GDPR require that the providers of individual automated decisions comply with specific measures, and the scope to which such measures apply.

[41] See D. Gunning et al., ‘XAI-Explainable artificial intelligence’, in Sci. Robot., n. 4(37), 2019; R. Dwivedi et al., ‘Explainable AI (XAI): core ideas, techniques, and solutions’, in ACM Computing Surveys, 2023, p. 1 ss.; V. Hassija et al., ‘Interpreting black-box models: a review on explainable artificial intelligence’, in Cognitive Computation, n. 16(1), 2024, p. 45 ss.

[42] See C. Zhang et al., ‘Explainable artificial intelligence (XAI) in auditing’, in International Journal of Accounting Information Systems, n. 46, 2022. In this paper, the Authors test the application of several XAI techniques in the field of auditing, highlighting the possible benefits and limitations thereof. 

[43] It was also proposed that “[since] in the case of litigation between a taxpayer and the tax authorities, judges will need information about the relevant AI system, [a] balanced approach in that regard should be able to differentiate between the information received by the court and the taxpayer. […] This differentiation in the scope and content of the explanation of AI systems is justified by the obligation of the court to deliver a judgment backed up by reasonable and persuasive arguments in support thereof. Taxpayers bear no such burden. For them, it is sufficient to understand why the AI system led to outcome X rather than outcome Y (counterfactual) and that such an outcome is or is not predicated on unfair or discriminatory elements”. B. Kuźniacki et al., ‘Towards eXplainable Artificial Intelligence (XAI) in tax law: the need for a minimum legal standard’, in World Tax Journal, n. 14(4), 2022, p. 583.

[44] This concern is also shared by Kuźniacki et al., op. cit., which include “secrecy” as one of the factors that impede the implementation of XAI in the realm of tax law.

[45] Pursuant to a similar approach, the recent AI Act imposes several obligations upon the deployers of AI systems, suggesting a user-empowering or compliance-oriented approach to AI explainability. Specifically, we refer to Recital 47 and art. 13(1), which state that “high-risk AI systems listed in Annex III shall be designed and developed in such a way that their operation is comprehensible by the users”, and to the second part of art. 13, which specifies that “an appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider […]” See F. Sovrano et al., ‘Metrics, Explainability and the European AI Act Proposal’, in Multidisciplinary Scientific Journal, n. 5(1), 2022, p. 126 ss.

[46] To this end, it is worth mentioning that the recent Italian Tax Reform has extended the scope of the right to be heard and receive an explanation upon notification of an act of assessment, which have been described by A. Viotto, ‘Il contraddittorio procedimentale nella legge delega per la riforma fiscale’, in Rivista Diritto Tributario, 2023, and S. Muleo, ‘Il punto su… il nuovo obbligo di motivazione degli atti tributari ovvero dell’impatto delle modifiche di testo e contesto’, in Rivista Diritto Tributario, 2023.

[47] Crucially, human intervention plays a fundamental role in diminishing the burden placed on Tax Authorities in relation to the duty to provide information on automated decision-making pursuant to Art. 15(1)(h) GDPR. If, in fact, a highly automated decision calls for a detailed explanation of the reasoning behind it, a “hybrid” decision should require less information on the reasoning behind automated decision-making. Accordingly, human intervention is necessary to conform possible limitations on the scope of Art. 15(1)(h) GDPR to the principle of proportionality.

[48] M. Hildebrandt, S. Gutwirth (eds.), Profiling the European Citizen, Springer, Berlin, 2010.

[49] M. Bassini, ‘Fundamental Rights and Private Enforcement in the Digital Age’, in European Law Journal, n. 25, 2019, p. 182 ss.

[50] It was pointed out that the progressive digitalization of public and private undertakings could lead to a new “systemic” Middle Ages (pp. 255-257), as the core aspects of our society – law, economics, politics, and so forth – would ultimately rely on the new common “system”, namely the web, as the place in which to address their main issues and challenges. See F. Fracchia, ‘Lo spazio della pubblica amministrazione. Vecchi territori e nuove frontiere. Un quadro d’insieme’, Il diritto dell’economia, 2023, available at https://www.ildirittodelleconomia.it/wp-content/uploads/2023/10/08Fracchia.pdf