Abstract
Greene argues that evolved, fast moral intuitions that sustain cooperation within small “tribes” become maladaptive in large‑scale, intergroup contexts, creating a “tragedy of commonsense morality” that hampers solutions to global issues like climate change and poverty.
He proposes a dual‑process model—automatic, emotion‑driven judgments versus slow, deliberative reasoning—and advocates using the latter to build an impartial “metamorality” that treats welfare maximisation as a common currency for resolving moral conflicts. The solution calls for pragmatic institutions and decision procedures that apply this welfare‑based metamorality while accommodating deep‑seated moral intuitions to ensure public legitimacy and cooperation.
Context
Moral Tribes (2013) participates in the experimental philosophy movement, which uses empirical data (surveys, neuroscience) to inform philosophical questions. It responds to moral problems of the 21st century, such as global poverty, climate change and nuclear risk, where traditional tribal moral psychology is inadequate.
Philosophical context
The book also engages with philosophical traditions, in particular those of Hume, Kant and the utilitarians.
David Hume’s insistence that moral judgments spring from sentiment, not reason, is an intellectual ancestor of Joshua Greene’s psychological account of morality. Steeped in Enlightenment thinking, Hume wrote:
“Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them”.
This prefigures Greene’s emphasis on fast, affective moral intuitions. Greene’s dual‑process model (distinguishing automatic, emotion‑driven responses from slower, deliberative reasoning) echoes Hume’s claim that affective responses furnish moral motivation while reason supplies means‑end calculations and factual guidance.
Hume’s view that sympathy and social feeling shape moral approval also resonates in Greene’s work on how evolved emotional responses support everyday moral judgments. Emphasising communal sentiments and conventions, Hume wrote:
“the ... approbation of mankind, or of the world, is the sole source of the moral good”.
Similarly, Greene documents how culturally ingrained emotional reactions (e.g., disgust at harm to individuals) drive much of ordinary moral cognition and are reinforced by social contexts. Greene’s empirical demonstrations that people’s judgments shift when prompted to think consequentially can be seen as a modern, experimental counterpart to Hume’s account of how reflection and sympathy modify sentiment‑based approvals.
Where Greene departs is on norms. Hume was wary of deriving moral systems from abstract reason alone, focusing instead on the stabilising role of custom and sympathy. Greene accepts Hume’s psychological primacy but argues that reason can and should be used to construct an impartial “metamorality” prioritising general well‑being. Greene summarises this utilitarian outlook when he writes in Moral Tribes that
“our best moral thinking must be founded on the impartial promotion of the greater good”.
This takes Hume’s descriptive claim about sentiment as a starting point for a prescriptive project that uses deliberative reasoning to correct parochial intuitions.
Hume supplies the philosophical groundwork — sentiment as the engine of moral judgment, the limits of reason as a motivator, and the social character of moral approval — which Greene operationalises and tests with contemporary psychology. Greene’s work reads as Humean in its anthropology, but it revises Hume’s scepticism about reason’s normative power: Greene accepts the primacy of feeling while assigning reason a corrective, system‑building role that Hume himself treated with far more caution.
Immanuel Kant’s moral philosophy influences Joshua Greene chiefly as an intellectual foil. Kant’s emphasis on duty, universal maxims, and the intrinsic moral worth of people contrasts with Greene’s judgement of actions by their outcomes and helps shape Greene’s argument about why affective intuitions can mislead and when impartial reasoning should intervene.
Kant’s central maxim:
“Act only according to that maxim whereby you can at the same time will that it should become a universal law”,
provides the archetype of a rule‑driven ethics that Greene repeatedly sets against utilitarian thought in order to clarify what reason and impartiality demand.
Kant maintains a strict prohibition on treating persons merely as means:
“So act that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only”.
This casts the moral conflicts Greene investigates (e.g., sacrificial trolley dilemmas) into sharp relief. Greene’s empirical findings — that people’s intuitive responses often reject sacrificial harm even when it would maximise overall welfare — are framed as cases where duty intuitions (Kantian in spirit), driven by emotional aversion, conflict with consequentialist calculations arrived at through reflective reasoning. Greene uses this contrast to argue that intuition can outvote impartial moral reasons, but that reason can and should correct intuitions when they yield outcomes that conflict with impartial welfare considerations.
Kant also influences Greene indirectly by prompting analysis of moral motivation and the authority of reason. Kant famously asserts that moral worth depends on acting from duty and respect for the moral law, independent of inclinations. Greene, by contrast, emphasises that moral motivation is largely affective and that reason often functions instrumentally. He thereby challenges Kant’s claim that pure practical reason can supply motivational force independent of sentiment. Greene’s position — connecting normative claims to psychological facts — treats Kantian duty as a target: a plausible, principled system that may nonetheless be psychologically infeasible or dangerously parochial when applied as public policy without regard to consequences.
Kant’s rigour and demand for impartiality (an impartial law applicable to all rational agents) press Greene to refine his own normative proposals. Greene’s call for an impartial “metamorality” that coherently aggregates reasons across individuals echoes Kant’s universalism in aiming at consistency and impartiality; however, it substitutes utilitarian general welfare for the Kantian categorical imperative as the reference standard. In this way Kant shapes Greene’s project by supplying a benchmark of duty: Greene must show, against Kantian intuitions and principles, why judging an action solely by its consequences better resolves moral conflict in pluralistic societies.
Utilitarianism shapes Joshua Greene’s moral philosophy both as inspiration and as the normative goal he endorses. Greene adopts utilitarianism’s core idea, that morality should aim at maximising overall well‑being, and frames his project as finding a psychologically realistic route to that impartial aim. As he puts it in Moral Tribes:
“Our best moral thinking must be founded on the impartial promotion of the greater good”.
This formulation explicitly echoes classical utilitarian commitments to general welfare and impartiality.
Greene uses empirical psychology to explain why ordinary moral intuitions resist utilitarian conclusions and to argue that reasoned deliberation can correct those intuitions. He treats responses such as reluctance to endorse sacrificial harm as affective phenomena to be studied rather than insurmountable moral constraints:
“our gut reactions were not designed to form a coherent moral philosophy. Thus, any truly coherent philosophy is bound to offend us.”
That claim reframes utilitarian prescriptions as demanding a cultivated impartiality that overrides parochial, emotion‑driven judgments.
At the same time Greene modifies utilitarianism to accommodate psychological and institutional realities: he stresses pragmatic implementation and the use of rules and norms sensitive to motivation and feasibility, while retaining the core consequentialist commitment that the consequences of one’s conduct are the ultimate basis for judging its rightness or wrongness. He argues for a “metamorality” grounded in consequentialist reasoning while recognising human limitations — an approach captured when he says we should use reason to build moral systems that promote cooperation across groups, even if such systems contradict our evolved intuitions. In this way Greene inherits utilitarian ends but recasts the means in light of modern empirical findings.
Commentary
Introduction: The tragedy of commonsense morality
Greene opens by diagnosing a modern moral problem he calls the "tragedy of commonsense morality". He argues that our intuitive moral systems, what he calls commonsense moralities, work well inside small groups but clash destructively when different groups interact, producing a new kind of tragedy distinct from the economic 'Tragedy of the Commons' (where individuals, acting in their own self-interest, overuse and deplete a shared resource, ultimately harming the collective good).
He illustrates this with a parable about herding tribes that adopt different rules for shared pasture. Each rule keeps "cooperation within the tribe" but causes conflict when groups meet on the same land, because each tribe treats its rule as "self-evidently correct".
Greene identifies two coordination problems at the heart of moral conflict. First is the individual‑level problem, exemplified by the Prisoner’s Dilemma (each prisoner can either cooperate for mutual benefit or betray their partner for individual gain), where "self-interest undermines cooperation" inside a shared system — see the payoff sketch below. Second is the intergroup problem — an "Us vs. Them" clash — where different groups follow moral codes that are "mutually irreconcilable" despite seeming obvious to insiders.
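The payoff structure behind the individual‑level problem can be written out directly. The following Python sketch uses hypothetical payoff numbers (any values preserving the standard ordering would do); it illustrates the logic Greene invokes, not anything from the book:

```python
# Illustrative Prisoner's Dilemma payoffs (units of welfare; the numbers
# are hypothetical, chosen only to satisfy the standard ordering
# temptation > mutual cooperation > mutual defection > sucker's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximises my own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the dominant strategy whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both worse off than mutual
# cooperation (3, 3) — self-interest undermines cooperation.
```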
Drawing on evolutionary psychology, Greene contends that moral intuitions are "fast, emotionally driven" mechanisms shaped to bind small groups: they promote cooperation and punish outsiders, but are poorly suited to impersonal, global trade-offs. These intuitions therefore mislead us in modern contexts where "distant strangers" and large-scale problems dominate.
As a result, commonsense moralities fuel intractable conflicts — over religion, politics, and national interest — because each side treats its intuitions as "self-evident" and demonises the other. Greene argues this is dangerous for addressing global challenges like climate change, pandemics, nuclear risk, and extreme poverty, which require resolving trade-offs across diverse peoples.
To address the tragedy, Greene motivates the search for a "metamorality": a decision procedure or common currency able to translate different moral views into comparable terms so large-scale cooperation and fair resolutions become possible. He suggests a shift toward a more outcome-focused framework (later defended as a form of utilitarianism) to mediate between competing moral systems.
Greene frames the book's project as diagnosing why evolved moral intuitions lead to intergroup tragedies while motivating a rational, comparative metamorality to resolve large-scale moral conflicts.
I. Moral problems
The tragedy of the commons
Greene uses the tragedy of the commons to illustrate how evolved, tribal moral instincts that work within small groups break down in large‑scale collective problems. Individual members of a group face incentives to maximise short‑term personal gain from a shared resource, while the cost of overuse is diffused across the group. This "rational" behaviour by each person yields collectively disastrous outcomes.
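A toy numerical model makes this incentive structure explicit. The Python sketch below uses made‑up numbers for pasture capacity and herd sizes; it illustrates the logic of the dilemma rather than any calculation in the book:

```python
# Toy commons model (all numbers hypothetical). Each of N herders grazes
# some animals on a shared pasture; every animal's value falls as total
# grazing rises, so each extra animal enriches its owner while imposing
# a small, diffused cost on everyone.
N = 10
CAPACITY = 100  # grazing level at which the pasture is worthless

def value_per_animal(total: int) -> float:
    """Pasture quality declines linearly with total grazing."""
    return max(0.0, 1.0 - total / CAPACITY)

def payoff(mine: int, others: int) -> float:
    return mine * value_per_animal(mine + others)

# Social optimum: every herder shows equal restraint (5 animals each here).
social_best = max(range(CAPACITY // N + 1),
                  key=lambda k: N * payoff(k, (N - 1) * k))

# But if everyone else restrains themselves, each herder privately gains
# by grazing more — the individually "rational" move that, repeated by
# all, yields the collectively disastrous outcome.
private_best = max(range(CAPACITY),
                   key=lambda k: payoff(k, (N - 1) * social_best))
assert private_best > social_best  # 27 > 5 with these numbers
```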
Because our moral intuitions evolved to solve cooperation in small, interdependent groups, they emphasise loyalty, reciprocity, and punishment. These are mechanisms that enforce restraint locally but do not automatically scale to anonymous, large‑scale commons (like the global atmosphere or fisheries). Thus commons dilemmas produce persistent conflict between what feels morally right for one’s tribe and what is required for the common good.
Greene stresses that intuitive moral responses alone cannot reliably solve such social dilemmas. We need deliberative "moral machinery" such as institutions, rules, and impartial reasoning (a metamorality) to realign individual incentives with collective welfare through mechanisms like regulation, property rights, tradable permits, taxes, or cooperative agreements.
Moral machinery
"Moral machinery" denotes the psychological and social systems that generate, propagate, and enforce our moral judgments and behaviours. Greene describes a dual‑process moral psychology: fast, automatic moral intuitions shaped by evolutionary pressures and cultural learning, and a slower, controlled form of reasoning capable of impartial cost–benefit calculations.
Intuitions are "point‑and‑shoot" responses, emotionally charged, context‑sensitive, and tuned to solve cooperation within small groups by privileging loyalty, authority, and purity. These quick responses work well for everyday social life but produce judgments based on duty that resist tradeoffs (for example, forbidding instrumental harm even when it would save more lives).
Reasoning is a corrective tool: deliberative, effortful, and able to take the perspective of impartial welfare. Greene argues reasoning can override or revise intuitions when problems scale beyond tribal settings (e.g., global coordination dilemmas, climate change, or public policy) where impartial consequences matter more than local allegiances.
Cultural institutions and practices serve as external "machinery" that stabilises cooperation: norms, rules, monitoring, sanctions, property rights, and markets convert fragile psychological tendencies into reliable collective outcomes. But when tribes with different moral intuitions interact, their internal moral machinery can clash, producing persistent disagreement and coordination failures.
To resolve intergroup moral conflict, Greene proposes a metamorality: a public decision procedure that adjudicates disputes between competing moral codes. He leans toward a pragmatic, consequentialist metamorality, one that uses reasoned, impartial welfare tradeoffs, to supplement or override tribal intuitions for large‑scale moral problems.
Strife on the new pastures
This chapter describes Greene's parable about multiple groups arriving at a shared, limited resource: pastures. Tribe members each follow internally coherent moral codes which coordinate behaviour within tribes but clash across tribes because there is no agreed procedure for resolving conflicting demands.
Within each tribe, norms, reputational incentives, and punishments keep cooperation stable. When tribes interact over the new pastures, however, their different intuitions about fairness, rights, and permissible actions produce incompatible expectations and mutual distrust, turning potential cooperation into conflict.
Greene uses the parable to show that moral rules tuned for small, familiar groups fail to scale. What feels morally right inside one tribe can cause harm when applied across groups, creating social dilemmas and competitive escalation.
The lesson is that commonsense, intuition‑driven morality cannot reliably adjudicate intergroup disputes. To avoid destructive stalemates, Greene argues for a neutral metamorality, an impartial decision procedure and institutional design (rules, incentives, or consequentialist reasoning) that can align interests and resolve conflicts between tribes.
II. Morality: fast and slow
Trolleyology
Greene uses the trolley‑problem thought experiments to expose a robust conflict between quick, emotion‑driven moral intuitions and slow, deliberative moral reasoning. In these thought experiments, the choice is either to do nothing — in which case several people will be killed — or to intervene and sacrifice one person to save the others.
Greene shows that people reliably make different judgments depending on psychological features of harm. Direct personal force, proximity, and emotional salience trigger strong prohibitions, while more impersonal means elicit utilitarian cost‑benefit reasoning.
He presents neuroscientific and behavioural evidence for two interacting systems: an automatic "point‑and‑shoot" moral faculty that issues immediate, affective judgments, and a controlled "manual" faculty that can perform impartial welfare calculations when deliberation is engaged. The trolley cases demonstrate how these systems produce systematic divergences that ordinary moral intuition cannot easily reconcile.
Greene argues these divergences matter for real‑world ethics: intuition‑based, tribal morality resists policies requiring instrumental harm or impartial tradeoffs, such as some public‑health triage, redistribution, or collective‑action solutions, because those policies conflict with evolved emotional responses. He therefore endorses using deliberate, publicly justifiable decision procedures (a metamorality leaning toward impartial consequentialist reasoning) to adjudicate disputes and guide policy when intuitive judgments lead to intergroup conflict or suboptimal large‑scale outcomes.
Efficiency, flexibility, and the dual-process brain
Greene explains moral cognition as a product of tradeoffs between efficiency and flexibility, implemented by a dual‑process brain. The intuitive fast system produces efficient, automatic moral answers that solve frequent social problems quickly and with little cognitive cost. These intuitions are shaped by evolution and culture to enforce cooperation in small groups through rules and emotions that guide moral decision‑making.
By contrast, the rational, slow system is cognitively costly but flexible. Deliberate reasoning can override automatic responses, perform abstract cost‑benefit calculations, and apply impartial principles to novel or large‑scale problems. This flexibility allows humans to consider consequences across broader scopes and timeframes, enabling solutions that intuitive rules cannot reach.
The tension between the two systems explains persistent moral conflicts: intuitive, rule‑based judgments often resist tradeoffs and instrumental harms, while deliberation weighs those harms against general welfare. Greene argues that recognising this architecture shows why commonsense morality fails to scale and why we need institutional “moral machinery” and a metamorality — procedures that employ deliberative, impartial reasoning when solving intergroup and large‑scale collective problems.
III. Common currency
A splendid idea
This chapter presents the proposal that impartial, welfare‑maximising reasoning — a metamorality — can resolve conflicts between competing tribal moral codes. Greene calls it "splendid" because it generalises the intuitive moral insight that we should treat similar interests similarly, that is, extend that impartiality beyond the tribe to all affected individuals.
He argues the idea is attractive because it promises clear decision procedures (weigh costs and benefits across people), scales to large, anonymous problems, and can be implemented via institutions and rules that align incentives (taxes, regulations, markets, rights). The challenge is that this impartial ethic often clashes with ingrained intuitions about personal rights, loyalty, and sacred values, which generate strong emotional resistance.
Greene admits practical limits: full utilitarian calculus is difficult in messy real‑world cases and may require constraints to respect important moral intuitions. Still, he defends a pragmatic, consequentialist metamorality as a tool for adjudicating intergroup disputes and designing institutions — using deliberation where intuition fails — to improve collective outcomes while seeking public justifiability.
In search of common currency
Greene investigates whether there is a single evaluative metric — a "common currency" — that can translate diverse moral values into comparable units for resolving conflicts between tribes. He frames the problem as: without a shared scale for weighing harms, benefits, loyalties, and rights, moral disagreement becomes intractable because different tribes prioritise incommensurable interests.
He argues that impartial welfare (general well‑being) is the most promising candidate for common currency because it generalises the moral intuition to "treat like cases alike" and scales to large, anonymous problems. Welfare comparisons allow cost–benefit tradeoffs and decision procedures that can mediate disputes where tribal intuitions conflict.
Greene acknowledges practical and philosophical challenges. Measuring welfare is difficult, some values (e.g., rights, sacred commitments) resist quantification, and people’s affective responses may reject instrumentalising certain interests. He therefore proposes a pragmatic approach — a metamorality constrained by respect for deep moral intuitions and institutional safeguards — using welfare as the default currency while accommodating certain constraints when public justification requires them.
Common currency found
Greene argues that impartial welfare can function as a practical "common currency" for adjudicating conflicts between competing moral codes. By converting harms and benefits into welfare terms, disparate values become measurable by the same standards, allowing cost–benefit reasoning to guide resolutions to intergroup disputes and large‑scale coordination problems.
He maintains that deliberative judgment — though often counterintuitive in its conclusions — can override tribal intuitions when those intuitions block policies that improve overall well‑being. Measurement and normative objections (sacred values, rights) pose real challenges, but Greene proposes treating welfare as the default public standard while accommodating certain constraints and institutional safeguards where public justification requires them.
Finding this common currency, Greene concludes, enables the design of institutions and decision procedures (taxes, regulations, permits, rights framed to protect welfare) that align individual incentives with collective good and reduce destructive conflict between tribes.
IV. Moral convictions
Alarming acts
Here Greene examines how certain actions provoke intense moral alarm — strong, automatic prohibitions — even when those actions could, in principle, be justified by impartial consequences. Greene shows that emotional responses mark some harms as especially salient (directness, personal force, violation of sacred norms), producing judgments that resist tradeoffs.
He links these alarms to evolved social functions. Quick, affect‑driven prohibitions help maintain cooperation and trust within small groups by forbidding behaviours that would corrode social bonds. But those same alarms can obstruct solutions to large‑scale problems, since they make people unwilling to accept instrumental harm or sacrifice, even when doing so would improve overall welfare.
Greene argues we must recognise when moral alarm is useful and when it misleads. For large, intergroup dilemmas, deliberative judgement and institutional design should sometimes override instinctive alarms. However, it is also necessary to respect deeply held values where public justification demands limits on purely consequentialist calculations.
Justice and fairness
Justice and fairness in Moral Tribes occupy the middle ground between tribal intuitions and impartial moral reasoning. Greene shows that people naturally care about fairness — reciprocity, equal treatment, and just deserts — because these norms support cooperation within small groups. Fairness intuitions are fast, emotionally charged and sensitive to intentions and deservedness.
At the same time, impartial justice (concern for overall welfare and equal consideration of interests) can demand redistributive or aggregate‑maximising measures that conflict with common fairness instincts. Greene highlights tensions: people punish perceived cheaters even at personal cost, resist blatant instrumentalisation of individuals and judge fairness differently when outcomes versus procedures are emphasised.
Greene’s diagnosis is pragmatic: fairness norms are essential for social life but can misfire across groups or on large scales. He advocates supplementing fairness instincts with deliberative decision procedures — using a welfare‑based common currency while preserving procedural protections and some Kantian, duty‑based constraints — in order to craft institutions that are both publicly justifiable and effective at solving collective problems.
V. Moral solutions
Deep pragmatism
This chapter presents Greene’s pragmatic proposal for resolving intergroup moral conflict. He suggests adopting a public, decision‑procedural metamorality grounded in impartial welfare while remaining sensitive to entrenched moral intuitions. Rather than claiming a single philosophical theory is absolutely true, this approach treats moral reasoning as a tool for solving practical coordination problems among diverse tribes.
Greene argues for three core commitments. First, use impartial, consequentialist calculations as a default mechanism for adjudication, aiming to maximise collective welfare when deliberation and evidence allow. Second, implement those judgments through institutions and publicly justifiable rules that align incentives, making decisions stable and enforceable. Third, constrain pure aggregation where compelling duty or sacred values demand special protection, allowing exceptions to maximise cooperation and legitimacy.
The aim is pragmatic: produce workable, stable solutions to large‑scale dilemmas (climate, poverty, war) by combining deliberative calculation, institutional design, and respect for deep moral reactions. This means overriding intuitions when they produce destructive stalemates, but accommodating them when necessary to secure cooperation and public acceptance.
Themes
Cooperation
Cooperation between groups is often undermined by self-interest or by a group’s own sense of morality. The world is changing rapidly, but humans are still biologically much the same — we remain wired for life in small hunter-gatherer tribes. Evolution has given us the skills to cooperate within groups, but our ability to cooperate between groups still leaves much to be desired; the history of conflict is enough to tell us that.
Mutually beneficial cooperation is endangered by many things, but the clearest threat is what’s known as the tragedy of the commons. This is the conflict between self-interest and collective interest.
A second threat to mutually beneficial cooperation is known as the tragedy of commonsense morality. This time it’s a question of Us versus Them. In other words, one group sets its own values against those of another.
An example of this mentality is the story of the Danish newspaper Jyllands-Posten. In defiance of the Islamic tradition (hadith) forbidding visual depictions of the Prophet Muhammad, it published a series of cartoons satirising Muhammad in 2005. Global media outlets followed the controversy. Before long, violent protests sprang up around the Muslim world. Over a hundred people were killed, and Danish embassies in Syria, Lebanon and Iran were set on fire. The two groups – Danish journalists and Muslims – were each fighting for what they saw as commonsense morality. The journalists hated feeling censored, while Muslims didn’t want their religion disrespected. But the end result was conflict. This is how commonsense morality can lead to tragedy.
Utilitarianism
Utilitarianism holds that each person’s happiness counts equally, but it can undervalue people’s rights in the process. The philosophy treats happiness as the most important concern when making moral decisions.
An example is to imagine that a train carriage is hurtling out of control toward five railway workers. If struck, they will be killed. You are standing on a footbridge overlooking the tracks. Next to you is another man carrying a large backpack. You realise the only way to save the five workers is to hurl this heavily loaded man onto the tracks below. This would kill him instantaneously but also stop the carriage and save the workers. So is pushing the man off the bridge morally acceptable? According to the principles of utilitarianism, you’re going to have to give him a push. As each life counts equally, this will ensure the greater happiness of the five at the cost of one life. However, the problem with utilitarianism is that it clearly doesn’t value individual rights highly. That’s because utilitarians think it’s morally acceptable to overlook an individual’s happiness if the end result is greater overall happiness.
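The utilitarian bookkeeping in this case is deliberately crude. On the strong simplifying assumption that each life counts as exactly one unit of welfare — the “common currency” move — the arithmetic comes out as in this illustrative Python snippet, which is a sketch of the reasoning rather than Greene’s own formulation:

```python
# Footbridge arithmetic, assuming each life is exactly one unit of welfare
# (an illustrative simplification, not Greene's own formulation).
outcomes = {
    "do nothing":   {"deaths": 5},
    "push the man": {"deaths": 1},
}

def utilitarian_score(outcome: dict) -> int:
    # Each death subtracts one unit from aggregate welfare.
    return -outcome["deaths"]

best = max(outcomes, key=lambda name: utilitarian_score(outcomes[name]))
print(best)  # "push the man": a score of -1 beats -5
# This is precisely where the rights objection bites: the sum ignores the
# one man's claim not to be used merely as a means.
```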
If we use utilitarianism to make moral decisions, we shouldn’t forget the rights of individuals in the process. These rights should not be dismissed just because the happiness of a majority group is quantifiably larger.
Dual mode of moral thinking: automatic or manual
Human moral judgment operates in two interacting modes. The first is an 'automatic' mode: fast, intuitive, emotionally driven reactions that arise without deliberation. These are shaped by evolved instincts, cultural norms and immediate social cues. They show up as gut feelings of disgust, empathy, or anger and often guide quick moral verdicts (e.g., immediate outrage at perceived betrayal). Automatic moral responses are efficient for everyday social coordination but can be biased, culturally narrow, and insensitive to abstract principles or unusual tradeoffs.
The second is a 'manual' (deliberative) mode: slow, effortful reasoning that evaluates principles, consequences and reasons. This mode can override or refine automatic reactions by applying impartial rules, cost–benefit calculations, or perspective-taking. Manual moral thinking enables principled consistency (e.g. applying fairness or utilitarian logic), reconciliation between conflicting values, and moral reasoning across cultures or novel situations. It requires cognitive resources and motivation, so it’s used selectively, often when stakes are high, when automatic intuitions conflict, or when social norms encourage reflection.
Both modes interact: deliberation typically builds on or responds to intuitions, sometimes rationalising them, sometimes correcting them. Effective moral judgment and cooperation depend on a balance, letting fast, social instincts guide routine interactions while using reflective reasoning to handle conflicts, exceptions, and moral progress.
Pragmatism vs. Intuition
Deep pragmatism and moral intuition represent two different approaches to moral judgment and decision-making. Moral intuition refers to fast, automatic, emotion-driven responses that evolved to regulate cooperation within small groups. These responses produce strong, often dutiful judgments about rights, fairness, and purity that help maintain trust and social cohesion among people who interact repeatedly. Deep pragmatism, by contrast, advocates a reflective, impartial, consequence-focused stance. It treats moral disagreements as coordination problems and uses reasoned assessment of outcomes to design rules and institutions that maximise overall well-being across diverse groups.
Where intuition excels — everyday interpersonal contexts, enforcing basic fairness, protecting individuals from exploitation — pragmatism can falter if it ignores deep-seated moral sentiments that sustain trust. Conversely, where intuition struggles — large-scale collective action problems, intergroup conflict, and policy contexts requiring trade-offs — pragmatism offers tools for weighing consequences, creating stable rules, and minimising systemic harms.
Greene’s proposal is not to discard intuitions but to subordinate them when they would cause far greater harm, while translating impartial reasoning into institutional safeguards that respect intuitions where feasible. The practical upshot is a hybrid stance: honour instinctive constraints that preserve trust and rights, but apply impartial, outcome-orientated reasoning to resolve cross-group problems and to design institutions that prevent recurring moral conflicts.