Search This Blog

Followers

Tuesday, August 23, 2016

MY LIFE



OCTOBER 2002, THIS MOVIE AND BURGLARY OF MY HOME - THEN HOME INVASION ON 5416 ALICE NE. SAW FACES,
GAVE ' REAL ON DUTY POLICE, CITY EMPLOYEES ' LICENSE PLATE NUMBER,
AFTER CALLING THEM ( POLICE ) AS PER REQUEST OF VICTIM OF HOME INVASION ( 5416 ALICE NE. ) AT
THAT ADDRESS. FOUR MEN DRESSED AS POLICE IN BLACK. BIG ' WHITE LETTERS ' ON BLACK CLOTHING !  ' POLICE.'






https://en.m.wikipedia.org/wiki/The_Sum_of_All_Fears_(film)


http://phaethonstar-soberover.blogspot.com/2016/06/up-dated-6-25-2016-why.html

AND IT NEVER STOPPED, OR.....THIS FROM RANK CORP. AND URL, I KNOW YOUR LAZY...............:

http://onlinelibrary.wiley.com/doi/10.1111/1756-2171.12140/full


AND, I NEVER HAD A HEART ATTACK, THAT LIE WAS MY FATHER'S, HE NEVER FELL FROM THE ROOF, ONLY THE ' CLOUDS;' 7400 FRANKLIN DR.............DATE, I KNOW; DUNBAR. THEN BOOK REGARDING ' ESP ' AND ORDER FOR ' MY SERVICE,' IN THE END, I AM LEFT WITH NOTHING, HALF SISTER'S EVERYTHING ( GERMAN IN COMPARISONS WITH BASQUE ??? ) MAGDALENA ( HALF SISTERS MOTHER ) NOTHING, AND MASONS EVERYTHING AT EXPENSE OF ' ELDER ' BROTHER'S ' RAPE OF ADULT MALES WHEN HE ( I ) WAS ABOUT THREE YEARS OLD, MAYBE EVEN YOUNGER. I GUESS THIS IS HOW IT WORKS; ANOINTMENT'S.

MY INJURY WAS DUE TO A CONCUSSION FROM A GUNSHOT WHILE DEFENDING MY HOME FROM ' CRIMINALS ' IN 2006. AS I SAID FROM THE BEGINNING, ' IN UNIFORM OR NOT.'

JASON ROWDAN WAS APPROACHED BY THE ANGLO SAXON FIREMEN AND POLICE OFFICER ( RANGERS ) AFTER THE HOME INVASION. HIS NEIGHBOR WAS THE VICTIM OF HAVING HIS ICE CHEST OF SIDE ARMS STOLEN FROM HIS FRONT PORCH AFTER SITTING THERE FOR ABOUT A WEEK, DURING HIS MOVE IN. ANOTHER VETERAN AND ' RANGER,' NOW EMT.

OTHER TV CREWS STOPPED BY, OTHER THAN UNIVISION, I WAS NEVER ON CAMERA OR EVEN MY SHADOW. CHOW ! no ? but, police........talk and talk, boyfriends, no !?!
Tu peux enlever tes lunettes, ma chérie., ' FACEBOOK ' oh, '  Tu te comportes comme un parfait imbécile. ! '


anyway......SO, TODAY I SEE A HEART DOCTOR, I GUESS, BEFORE A SURGERY.










Deterring fraud by looking away

Authors

  • I am grateful to Editor Benjamin Hermalin for his detailed suggestions, which allowed me to simplify the model and improve the exposition of the article. I also thank Matti Keloharju, Juuso Välimäki, Mikko Leppämäki, Pauli Murto, Vesa Puttonen, Tuomas Takalo, Sami Torstila, and two anonymous referees for their thoughtful comments.

Abstract

Individuals being audited potentially learn how to exploit the weaknesses inherent in any audit methodology if they face the same method many times. Hence, an auditor better deters fraud by randomizing her choice of methodology over time, thereby frustrating a would-be fraudster's ability to learn. In the extreme, an auditor benefits from refusing to audit, even though audits are costless to her.

1. Introduction

It is difficult, if not impossible, to design an audit technology1 that catches all fraud. That is why institutions inevitably rely on imperfect audit technologies which reduce but do not eliminate, undesired behavior. Moreover, the efficacy of an audit technology diminishes if used repeatedly because would-be fraudsters can learn how to best cheat the technology over time.
Consider health care fraud, which accounted for 1% to 3% of the US federal budget in 2011.2Allowing all reimbursement claims to face the same screening process creates a perfect environment for an opportunistic player to learn the loopholes of the system. Once a medical provider discovers a loophole in the Department of Health and Human Services' screening technology by cheating the system successfully, he can file multiple bogus claims. A recent article inThe Economist (2014) provides several examples of scams fraudsters have been repeatedly using to swindle Medicare and Medicaid programs. Treasury Inspector General for Tax Administration (2012) highlights the repeated nature of tax fraud by citing many incidents where a fraudster used either a single bank account or a single physical address to file hundreds of tax returns. In one case, the Internal Revenue Service (IRS) allowed 2137 returns filed from the same address in Lansing, Michigan. In another case, 590 returns were allowed to go to a single bank account.3
An agent benefits from learning a weakness in the audit technology only if his fraud goes undetected. Many types of financial fraud (e.g., financial statement fraud, money laundering, and unauthorized positions by traders) are undetectable by nature. If a fraud remains undetected, only the fraudster himself knows that a violation has taken place. The principal, on the other hand, remains unaware of her loss due to the fraud and therefore does not make the necessary adjustments to the audit technology to prevent future losses.
The undetectable nature of fraud and the predictability of the principal's strategy benefit the agent by allowing him to learn fully whether he is able or not to commit fraud successfully. The principal can hinder such learning by varying her practices. For example, when the audit technology is changed in each period, the agent does not gain anything by avoiding detection other than his stage payoff. Therefore, he has less incentive to cheat when he anticipates that a different audit technology will be employed in the next period. Even switching to a weaker technology might be useful in improving the principal's payoff. In some situations, the principal is better off by randomly refusing to audit, even if an audit costs nothing. For example, when the actions of the principal are unobservable, stochastic audits (i.e., randomizing the decision to audit) reduce fraud by preventing the agent from learning how to game the audit technology.
Changing the audit technology also decreases the agent's incentive to cheat by increasing his costs. Suppose that the agent has already made an investment to improve his ability to defeat a given technology. When the technology switches, he has to make the same investment once again. That is, switching decreases the agent's return. Another merit of switching is that it enables the principal to set a lower penalty for deterring fraud, which is particularly useful when the principal finds punishing the agent costly.
There is a vast amount of literature on effective audit strategies in a principal-agent framework. All existing audit models share one common feature: they assume the ability of the agent to be fixed. In other words, they assign a certain probability to the event that the undesired action of the agent successfully passes an audit. This assumption is innocent in a static setting but underestimates the ability of the agent in a dynamic environment. If a fraudulent action passes an audit, a rational agent is expected to increase his prior on his ability in the next round. To allow updating, it is necessary to treat the ability of the agent as a random variable, which has different realizations under different audit technologies.
The key novelty in my model is the introduction of uncertainty in the ability of the agent. When this ability is a random variable, the agent learns its realization by cheating. I make this learning opportunity valuable for the agent by constructing a dynamic game. In Section 2, I show that switching audit technologies eliminates learning and deters the agent from cheating. Then, in Section 3, I analyze an extreme environment where there is only one audit technology available for the principal. Although switching is not possible, the principal can still vary her practices by randomizing her audit decision (i.e., by auditing with a positive probability smaller than 1). I demonstrate that the optimal audit policy is stochastic. In Section 4, I derive the optimal switching rule for the principal, who incurs costs for switching. Finally, in Section 5, I discuss additional applications of the model.
Building on Townsend (1979), Mookherjee and Png (1989) also show that a stochastic audit policy dominates a deterministic policy when the principal monitors the agent's private state (e.g., the actual outcome of the project). The optimality of the stochastic policy in their article hinges on the reduction in auditing costs; the optimal audit policy would be deterministic in their model if audits were costless. In my model, on the other hand, audits do not cost anything, but stochastic audits are still optimal because they dissuade the agent from cheating by reducing his incentive to experiment. That is, it is not the cost savings but the dynamic incentives that makes stochastic audits optimal in my model.

2. A basic two-period model

A principal (she) and an agent (he) interact in two periods. At the start of the first period, the agent observes the audit technology in place. Neither he nor the principal know his ability to evade it if he attempts to commit fraud, although it is common knowledge that the probability he is the able type is inline image.
The agent decides whether to attempt fraud or not. If he attempts it and is not detected, his payoff is 1 and the principal's is −1. In this case, the principal learns her loss only when the game ends (i.e., the fraud goes undetected).4 If the agent attempts fraud and is detected, his payoff is inline image and the principal's is 0; moreover, the principal is aware the agent committed fraud and she terminates the relationship (i.e., ends the game resulting in a zero continuation payoff to the principal and the agent). If the agent does not attempt fraud, his and the principal's payoffs are 0 (though she does not learn this until the end of the game).
If the relationship was not terminated, the game moves to the second period. The principal decides whether to keep the incumbent audit technology or change it. In this section, the cost of switching to another audit technology is assumed to be 0. The agent observes the principal's decision (i.e., he knows if he is facing the old technology or a new one) and again decides whether to commit fraud or not. Payoffs are the same as the first period.5 The probabilities are the same if the principal changed the technology, the agent did not attempt fraud in the first period, or both. If the agent committed fraud, then, because it was not detected, he learns that he is the type that can defeat the period-one technology; so if that technology remains in place, his probability of committing fraud without detection is 1.
In a one-period static game, the agent cheats if his expected benefit from cheating (i.e., α) exceeds his expected cost (i.e., inline image). In a dynamic game, however, he needs to take into account the strategy of the principal (i.e., whether the principal employs the same technology in the next period). To see how the strategy of the principal affects the agent's decision, first note that when α is small, the principal eliminates fraud by employing the same technology in both periods. Namely, when
display math
the agent is honest (assuming that he will be honest if he is indifferent), and replacing the technology does not hurt the principal. On the other hand, when α is large enough that
display math
the agent always cheats regardless of whether the audit technology changes. In this case, switching to a new technology decreases the principal's expected loss from 2α to inline image. For the intermediate values of α (i.e., inline image, the agent will be dishonest if he knows that the same audit regime will be employed in both periods, but not if he knows that the principal will shift regimes. That is, switching completely eliminates fraud.6 The following proposition summarizes the main idea of the article.
Proposition 1. The principal is better off switching the audit technology in the second period.
Intuitively, switching is optimal because it eliminates the value of learning for the agent. Absent a switch in the audit technology, by successfully cheating, the agent garners two benefits: the stage payoff and future benefits. Switching eliminates future benefits (i.e., reduces the agent's continuation payoff) and makes cheating less valuable for the agent. Formally, it increases the threshold for α at which the agent starts cheating.
Switching also enables the principal to eliminate fraud with a lesser punishment. Namely, she can reduce the P necessary to serve as a deterrent from inline image to inline image. This feature is especially useful if the principal finds punishing the agent costly due to concerns about false positives or costs of imprisonment. In this article, I assume that P is exogenously determined. Therefore, it is not possible for the principal to set a higher punishment to deter the agent from cheating.

3. Auditing with one technology

In the previous section, switching among different audit technologies was shown to be effective in reducing fraud. In this section, I assume that there is only one audit technology available to the principal. Although switching is not possible anymore, the principal can still reduce the agent's ability to learn how to game the audit technology by hiding the intensity of audits. This can be achieved by utilizing randomized strategies.7
To show how the randomization of audit decisions can reduce fraud, I make two changes in the model: (i) the principal has only one audit technology, and (ii) audits are unobservable (i.e. the agent does not observe the principal's decision). At the start of the first period, the principal decides whether to audit and without observing her decision, the agent decides whether to attempt fraud. The principal commits to auditing for certain in the second period.
Let μ be the probability the principal audits in the first period. After cheating successfully in the first period, the agent updates his ability using Bayes Rule as follows:
display math(1)
For small values of α, the principal eliminates fraud by auditing in both periods. Particularly, when
display math
or, equivalently, inline image, the agent does not have enough incentive to cheat if there is an audit.
For larger values of α (i.e., inline image), the agent always cheats in both periods. In this case, the principal audits in the first period and catches the agent with probability inline image. For the remaining values of α (i.e., inline image), the agent's incentives to cheat in the second period depend on the first-period probability of audit. In particular, the principal deters the agent from cheating in the second period by committing to auditing with probability μ in the first. The agent is deterred from fraud in the second period if he is sufficiently uncertain about his ability to defeat the auditing technology. To wit, his updated belief about his ability is, recall, given by expression (1), so his expected utility from committing fraud in the second period is
display math(2)
If expression (2) is negative, the agent is deterred from committing fraud. The necessary condition to deter fraud can be written as
display math
The principal minimizes her expected loss by setting inline image (i.e., setting the probability of an audit as high as possible, at the same time deterring the agent from cheating in the second period). The principal's expected loss becomes
display math
which is smaller than 2α (i.e., her expected loss if she audits in both periods) for inline image. The analysis thus far is summarized in the following proposition.
Proposition 2. The principal is better off by committing to auditing in the first period with probability μ, where
display math
Proposition 2 summarizes a surprising result: by looking away with some probability in the first period, the principal might deter cheating in the second. When audits are hidden, the agent relies on his inferences of the principal's strategy when he updates his beliefs about how well he can fool the audit system: even if his first attempt at fraud is successful, he remains uncertain as to whether his second attempt will succeed. This can deter him from cheating. For example, if inline image and inline image, the principal can deter fraud in the second period by setting inline image. Her expected loss in this case is 0.75. If she erred by setting inline image, her expected loss would increase to 1.
It is useful to compare this result with the one in Mookherjee and Png (1989), where the optimal audit policy is also stochastic. In their model, an audit perfectly reveals fraud. Therefore, the agent does not cheat when he believes that the principal will audit. By committing to audit with some probability, the principal prevents the agent from cheating, and at the same time reduces audit costs. In my model, on the other hand, audits do not cost anything, but stochastic audits are still optimal because they reduce the value of learning.
To randomize her audit decision, the principal needs commitment in both Proposition 2 and in Mookherjee and Png (1989). The nature of the commitment, however, is fundamentally different in the two cases. The principal in Mookherjee and Png (1989) would like to commit to auditing because once the agent has been induced to be honest, the principal has no incentive to audit. In Proposition 2, on the other hand, the principal would like to commit to not auditing because once the agent has been induced to be honest in the second period, the principal has the incentive to audit in the first. Absent commitment power, the principal audits in both periods. Figure 1 shows that the commitment improves the principal's payoff for inline image.
Figure 1.
THE SOLID LINE REPRESENTS THE PRINCIPAL'S EXPECTED LOSS UNDER THE OPTIMAL AUDIT POLICY DERIVED IN PROPOSITION 2. THE DASHED LINE SHOWS HER EXPECTED LOSS WHEN SHE DOES NOT HAVE COMMITMENT POWER. IN THAT CASE, SHE AUDITS IN BOTH PERIODS.
The need for commitment, though, does not necessarily imply that auditing in both periods is the optimal audit policy in every environment where the principal does not have commitment power. To understand how the principal might benefit from not auditing—despite the fact that audits are costless—when she cannot commit to a particular strategy, assume that the principal's actions are observable. If the principal does not employ an audit technology in the first period, she offers the agent an opportunity to cheat but at the same time prevents him from learning about any possible weakness in her technology. For inline image and inline image, the value of preventing the agent from learning whether he can defeat the technology or not is greater than the principal's loss in the first period (i.e., 1). Therefore, by looking away in the first period, the principal decreases her expected loss from 2α to 1.

4. An infinite-horizon model with switching costs

In this section, I analyze an infinite-horizon version of the audit game with some extensions. To ensure the convergence of expected payoffs, I assume that the principal and the agent discount future cash flows with discount factor inline image in each period.
I first extend the model in Section 'A basic two-period model' by allowing the agent to improve his ability at some cost.8 At the start of each period, the agent observes the audit technology in place and makes an investment which determines his ability (i.e., the probability he defeats the incumbent audit technology). More specifically, if he invests inline image and attempts fraud, then he defeats the technology with probability a. For the rest of this section, a will remain the agent's choice variable. I consider only pure strategies for the agent. The cost of investment inline image is an increasing and convex function. I also assume that inline imageinline image, and I(1) is high enough for the agent's problem to have an interior solution. If the agent attempts fraud and is not caught, his ability to defeat the same technology improves to probability 1. That is, he does not have to make an investment again to defeat the same technology.
In a one-period game, the agent's payoff for being dishonest becomes inline image. Denote by inline image the agent's optimal choice in a one-period game. By the assumed convexity of inline image, the first-order condition inline image is necessary and sufficient for optimality. This myopically optimal level will be a useful point of comparison in the analysis to follow.
If the principal uses the same technology in every period, the agent's expected discounted payoff inline imagefor being dishonest will be
display math(3)
The first-order condition gives
display math(4)
where inline image is the agent's optimal choice. The term inline image is a global maximum, because the second derivative of inline image with respect to a is strictly negative (i.e., inline image).
When inline image, the agent cheats in the first period and then continues to cheat if successful. The principal's expected loss is the sum of transfers to the agent:
display math
If the principal switches technologies, the agent must invest again. The game is stationary, so if the agent wishes to commit fraud in one period, he will wish to do so in all. His lifetime expected discounted payoff inline image from fraud is, thus,
display math
Hence,
display math(5)
Let inline image be the parameter that maximizes the function above. The first-order condition is
display math(6)
The following lemma shows that the second-order condition is satisfied by proving that the second derivative of inline image, with respect to a evaluated at inline image, is negative. Proofs of all lemmas are in the Appendix.
Lemma 1. Define the function inline image, where inline image is twice differentiable and stricly concave and inline image is decreasing and affine, where χ is an interval in inline image. Then, if inline image satisfies inline image is a global maximum.
Observe that expression (5) can be written as inline image, which implies that inline image is nonpositive if and only ifinline image is nonpositive. That is, if the agent does not cheat in a one-period game, he also does not cheat in an infinite-horizon game where the principal switches technologies in every period.
When the agent's stage payoff for cheating is positive (i.e., inline image), the agent cheats in every period. The principal's expected loss becomes
display math
Clearly, inline image. That is, shifting audit regimes reduces the agent's investment and thus improves the payoff of the principal (i.e., inline image).
Note that when the principal never switches, the minimum amount of penalty required to eliminate fraud can be calculated by setting equation (3) evaluated at the optimum to 0 and solving for P:
display math
Because inline image is the agent's optimal choice, the following inequality holds for all inline image:
display math
When the principal switches in every period, the right-hand side of the inequality denotes the minimum amount of penalty required to eliminate fraud for inline image. It can be derived by setting equation (5) evaluated at the optimum to 0. Therefore, the amount of penalty needed to eliminate fraud is less when the principal shifts regimes. As mentioned in the previous section, switching is a substitute for a higher punishment.
Once again confronted with the optimality of switching, I analyze the optimal switching rule by introducing a switching cost, inline image, that the principal incurs when adopting a new audit technology. The condition inline image implies that whenever the principal believes that the audit technology has become useless, she strictly prefers changing it to letting the agent cheat with success probability 1. I assume, without loss of generality, that the principal must have an audit technology to operate at all. Thus, she pays c at the beginning of the game to employ the first audit technology.
To make the analysis meaningful, I also assume that switching in every period deters the agent from cheating. This assumption implies that the agent does not have the incentive to cheat in a one-period game (i.e., inline image). Otherwise, the agent's and the principal's optimal strategies would be trivial: the agent would cheat in every period, and the principal would switch in every period to reduce her expected loss due to fraud. In this case, the payoff of the principal, inline image, would include both switching costs and her expected transfer to the agent:
display math
where inline image can be calculated from equation (6).
The principal commits to a history-dependent switching policy. More specifically, at the beginning of every period, the principal commits to switching the technology with probability inline image, where t denotes the number of periods the current audit technology has been previously employed. For example, ϕ2is the probability that the principal switches to a new technology after employing the same technology twice. Let inline image define the entire strategy of the principal.
The following lemma summarizes an important result, which will be useful in the analysis to follow.
Lemma 2. There is no fraud in equilibrium.
Therefore the principal's expected loss, L, in equilibrium, consists only of the switching costs:
display math
Note that the principal's expected loss decreases as the principal switches less frequently (i.e., the partial derivative of L with respect to inline image is positive).
The incentive compatibility conditions of the agent (i.e., conditions to ensure that the agent does not commit fraud) are derived in the following lemma.
Lemma 3. The incentive compatibility constraints of the agent can be written as
display math
To understand the intuition for the expression in Lemma 3, suppose that in period τ the agent knows the incumbent technology, that it has been used for inline image periods, and will be used in the current period (which represents its tth period of use). Assume that the agent has not yet sought to commit fraud. The expression in Lemma 3 gives the agent's expected payoff if he commits fraud in period τ, which requires him to sink inline image in period τ.
The principal's program is shown below:
display math
where
display math
The following lemma is helpful in solving the principal's problem (i.e., deriving the optimal switching rule).
Lemma 4. The incentive compatibility constraint of the agent for inline image is binding in equilibrium.
Using the fact that the agent's incentive compatibility constraint for inline image is binding, I obtain
display math(7)
where inline image is the value of a that maximizes the agent's payoff (i.e., sets it to 0) for inline image. The first-order condition gives
display math(8)
Now I can substitute inline image from (7) into (8) and solve for inline image to get
display math(9)
Observe that inline image pins down the value of inline image in equation (7):
display math(10)
The following lemma uses this observation and calculates the principal's expected loss in equilibrium.
Lemma 5. The principal's expected loss in equilibrium is
display math
where inline image can be obtained from (9).
Because the principal's expected loss is determined by inline image, any switching policy that satisfies the incentive compatibility constraints in Lemma 3 and equation (8) is optimal. The analysis so far is summarized in the following proposition.
Proposition 3. The optimal switching rule inline image satisfies the incentive compatibility constraints in Lemma 3 and can be obtained from the expression below.
display math
The optimal switching rule makes the agent indifferent in the first period. If the technology is not switched, the optimal rule continues to induce the agent to be honest. One implication of Proposition 3 is that the principal cannot eliminate fraud by switching with a smaller probability in every period. Because the agent is indifferent in the first period, the incentive compatibility constraint for the second period would be violated if the principal commits to switch the technology in monotonically decreasing probabilities. Therefore, it cannot be the case that inline image for all inline image such that inline image.
The optimal switching rule is not unique. Several strategies can deter fraud, make the incentive compatibility condition binding in the first period, and have equal cost to the principal. For instance, if
display math
in the one-period game, the agent does not have the incentive to cheat because inline image is negative. In the infinite-horizon game, the agent cheats if the principal never switches the audit technology (i.e., by using equations (3) and (4), we have inline image, which is positive). Using equation (7) and the optimal switching rule derived in Proposition 3, one gets
display math
Now compare the strategy inline image for every t (i.e., switch to a new technology every period with probability 0.25) to the strategy inline image, and inline image (i.e., use the new technology twice, then change it with probability 0.37, and switch to a new one if it has been used three times previously). The first strategy binds the incentive compatibility conditions for every t. The second strategy binds it only for inline image (i.e., for inline image and inline image, the incentive compatibility conditions are slack9). The discounted sum of switching costs of the principal if she employs the first strategy is
display math
Her expected loss becomes
display math
if she uses the second strategy. Therefore, both strategies deter fraud and cost the principal the same.
Note that the principal's strategy is not time consistent: she does not have the incentive to switch when the agent has been honest. Therefore, without commitment, only a mixed-strategy equilibrium (i.e., the agent cheats with some probability, and the principal switches with some probability) can be sustained, which implies that absent commitment, the principal cannot eliminate fraud.

5. Discussion

When a fraudster is allowed to face the same audit technology many times, he can explore its loopholes and use that information to cheat repeatedly. I have built a model on this insight and shown that an auditor can limit a fraudster's learning by randomizing her strategy over different audit technologies (e.g., by utilizing different auditors).
Randomized audits are employed in many environments. For example, the Transportation Security Administration uses random security checks at airports.10 The obvious justification for randomization is cost savings. It is too costly to check every passenger at the airport. I argue that randomizing security checks at airports has another benefit: it reduces undetected security breaches (e.g., weapon smuggling) by making it more difficult for terrorists to learn the security procedures.
The conclusion regarding the merit of randomization can be extended to many other principal-agent frameworks. Consider a Chief Executive Officer (CEO) monitored by the same board of directors over several years. By observing the board members' reactions to his initiatives, the CEO can discover deficiencies in their monitoring. For example, he can learn how to design and present proposals (which are not necessarily in line with shareholders' objectives) in such a way that they have a greater likelihood of being approved. Frequent renewal of board members limits such learning.
Companies audited by the same firm over time might also learn how to manipulate their financial statements without being caught. Rotating audit firms reduces such an opportunity. In fact, accounting scholars and practitioners have extensively analyzed and debated whether auditor tenure improves audit quality.11 Proponents of mandatory rotation believe that setting a limit on auditor tenure increases auditor independence and improves objectivity, because auditors are less likely to form relationships with their clients if they have a shorter tenure. Opponents of mandatory rotation, on the other hand, argue that mandatory rotation increases audit costs and wastes the knowledge that the auditor has accumulated over time. My article contributes to this debate by providing a new rationale for why auditors should be rotated: because firms could otherwise learn how to successfully cheat over time.
In addition to external auditors, firms also rely on internal auditors to detect fraud. To analyze whether external or internal auditing is more desirable, Kofman and Lawarree (1993) establish a trade-off between competency and objectivity: internal auditors have more information about the operations of their firm and can produce higher-quality reports, but they are also more prone to collusion with the management. Assuming that internal auditors rarely change their auditing methods, whereas different external audit firms use different audit technologies, my model suggests that the possibility of rotation makes external auditing more effective than internal auditing in preventing fraud.
In some environments, it is possible for the principal to make it more difficult for the agent to cheat by improving her audit technology. For example, by offering higher salaries, shareholders might attract more talented individuals to the board and monitor the CEO more closely. Notice that switching audit technologies and improving a given audit technology are substitutes. Switching reduces the value of learning, and improvements increase the cost of cheating. They both decrease the agent's incentive to cheat.
As switching costs increase, the principal prefers to make investments to have one “heavily armored” technology instead of relying on many “lightly armored” technologies. For example, if the cost of changing the existing audit firm (i.e., the loss of the accumulated knowledge) outweighs the benefits of rotation, it might be more desirable to allocate a bigger budget to have a longer and a more comprehensive audit than to hire a new audit firm. Likewise, as the cost of the investment needed to improve the quality of an audit increases, the relative benefit of randomization also increases. For example, the IRS can catch more fraudulent tax refund applications by setting tighter filters in its screening device. However, tighter filters also impose a higher cost, as more legitimate applications will be withheld. Assuming that delaying legitimate payments is too costly, it might be cheaper for the IRS to reduce fraud by randomizing the intensity of its screening instead of setting more filters in its screening device.

Appendix





This Appendix contains the proofs of the lemmas in Section 'An infinite-horizon model with switching costs'.
Proof of Lemma 1. Proof. Observe
display math
Hence, inline image implies
display math(A1)
Observe
display math
where the second equality follows from (A1) and the fact that g is affine; the inequality follows because f is strictly concave and inline image for inline image.
The analysis so far proves that inline image is a local maximum and is not a local minimum. Because a function cannot have two local interior maxima without an intervening local minimum, if an interior local maximum exists, then it is a global maximum.□
Proof of Lemma 2. Proof. Suppose there were an equilibrium with fraud. Let t denote the first period in which the agent's strategy has him commit fraud. There will be some audit technology in place in that period. Suppose the principal deviates to a strategy that has her change the audit technology starting with the following period inline image. Because the agent knows inline image (the principal, recall, commits to it at the beginning of the game), his expected payoff in period t is
display math(A2)
if he commits fraud, where inline image is his discounted expected continuation payoff if he survives to the next period. Because the principal changes technology in period inline imageinline image is independent of the agent's investment in period t. If the agent does not commit fraud, he survives to the next period with certainty, so his discounted expected continuation payoff is inline image. The necessary condition for inline imageto be smaller than expression (A2) is
display math
but that expression cannot exceed inline image, which is nonpositive by assumption. Hence, were the principal to deviate in this fashion, the agent would not commit fraud in period t. Would the principal wish to deviate? Yes, because she reduces her expected loss in period t from inline image to 0, where inline image is the action the agent would have taken under the principal's original strategy.12 Because inline image, the principal's expected discounted payoff is greater deviating in this fashion. By contradiction, it follows there is no equilibrium in which fraud is committed.□
Proof of Lemma 3. Proof. The agent can guarantee 0 payoff by not cheating. Therefore, he cheats at inline image if and only if his expected payoff is higher than 0. If this is the case, because the game is stationary, he will continue cheating if successful, irrespective of the technology. His expected payoff, inline image, at inline image is13:
display math
Because the denominator of the expression above is positive, inline image is nonpositive (i.e., the incentive compatibility condition for inline image is satisfied) if
display math(A3)
As long as the constraint above is satisfied, the agent who cheated successfully will continue cheating if and only if the principal continues to use the incumbent technology.
To derive the incentive compatibility conditions for inline image, suppose that in period τ the agent knows the incumbent technology, that it has been used for inline image periods, and will be used in the current period (which represents its tth period of use). Assume that the agent has not yet sought to commit fraud. If he commits fraud in period τ, which requires him to sink inline image in period τ, his expected payoff will be
display math
The incentive compatibility conditions are now easy to derive:
display math(A4)
Note that for inline image, constraint (A4) becomes (A3).□
Proof of Lemma 4. Proof. Suppose that the incentive compatibility constraint for inline image is not binding. Then, because the derivative of L with respect to ϕ1 is positive, it would be possible to decrease the principal's switching costs by lowering ϕ1. Lowering ϕ1 does not effect constraints for inline image because ϕ1 does not appear in those constraints. If the constraint does not bind at inline image, then one can lower ϕ2 until it reaches 0. If the constraint still does not bind, the same procedure can be repeated for ϕ3, then for ϕ4, and so on.
To prove that this method works, I need to show that if the constraint for inline image is not binding at inline image, the constraint for inline image is not binding, either (i.e., it is possible to lower inline image, assuming that it is not 0, when inline image is 0).
Consider the constraints for inline image and inline image:
display math
Suppose that the first constraint (i.e., constraint for inline image) is not binding. Then the second constraint is not binding if
display math(A5)
At inline image, condition (A5) can be written as
display math
and after rearranging, it becomes
display math
which holds for every inline image. Therefore, it is possible to lower inline image when inline image is 0.□
Proof of Lemma 5. Proof. Recall that the principal's expected loss, L, in equilibrium, consists only of the switching costs:
display math
where
display math
Observe
display math(A6)
Observe, as well, that
display math(A7)
Using (A6) and (A7), I obtain
display math
Thus, the principal's expected loss can be written as
display math
Finally, substituting inline image from (10) into the equation above gives
display math




  1. 1
    I use the terms “auditing,” “screening,” and “monitoring” interchangeably when referring to activities aimed at detecting undesired action.
  2. 2
    Berwick and Hackbarth (2012) estimate the cost of health care fraud to the federal budget through Medicare and Medicaid programs to be somewhere between $30 and $98 billion. The total cost of fraud within the US health care system is estimated to be between $82 and $272 billion.
  3. 3
    Learning opportunities may also have played an important role in the recent financial crisis. For example, inThe Big Short, Lewis (2011) describes how investment bankers learned how to manipulate the perceived quality of mortgage portfolios after discovering loopholes in the models rating agencies used to rate them.
  4. 4
    If the principal interacts with many agents, this assumption amounts to making total losses (but not the loss from an individual agent) observable to the principal. For example, the IRS can estimate its total loss due to tax fraud, but it cannot identify the taxpayers who committed the fraud.
  5. 5
    When the number of periods is finite, I dispense with financial discounting to avoid notational clutter. The corresponding analysis should be understood to apply to sufficiently patient principal and agents. Discounting is introduced in Section 4.
  6. 6
    Switching is a subgame perfect strategy because it does not cost the principal anything. Also note that the optimality of switching does not depend on the observability of the principal's actions. Switching would be optimal even if the principal's actions were unobservable.
  7. 7
    If the principal interacts with a large number of agents, randomization can be achieved by auditing (at random) a fixed fraction of agents.
  8. 8
    Experimentation can be considered as a form of investment. For example, by cheating a small amount at first, a fraudster can have a better idea about his chances of defeating a given technology. The more he experiments, the more he learns.
  9. 9
    The incentive compatibility conditions for inline image are irrelevant because the technology is switched with probability 1 once it is used in three periods (i.e., inline image).
  10. 10
  11. 11
    See Stefaniak, Robertson, and Houston (2009) for a review of auditor rotation literature. Another useful reference is the detailed report prepared by the Government Accountability Office (2003) to Congress on the potential effects of mandatory rotation of audit firms.
  12. 12
    If the agent is willing to cheat, then his marginal return must exceed inline image, which in turn entails inline image.
  13. 13
    I am using the convention that inline image.





No comments: