Interview with Mehmet Ali UĞUR

Born in 1971 in Sorgun, Yozgat, Dr. Mehmet Ali Uğur graduated from the Department of International Relations at Marmara University in 1993 and worked as a Research Assistant at Sakarya University between 1994 and 1998. During this time, he pursued graduate studies in security at METU before moving to the United States on a scholarship. He first attended the American Law and Legal English Institute at the University of Delaware, then earned his MALD (Master of Arts in Law and Diplomacy) from Tufts University’s Fletcher School in 2000, where he later continued his doctoral studies in international law, negotiation, integration, and global governance, focusing his dissertation on international environmental law and regime-building. After working in the U.S. private sector from 2004 to 2009, he returned to Turkey and has been a faculty member at Yalova University’s Department of International Relations since 2010. His teaching and research cover political history, international law, negotiation theory and practice, and the Asia-Pacific region, with particular interest in the Arctic and other emerging global areas.


Question 1: What are the legal foundations of the International Criminal Court and its role in upholding global justice?

The International Criminal Court is, in fact, a relatively new institution. As you know, its final statute was drafted in the late 1990s and, starting from the early 2000s, it gained functionality once a sufficient number of states ratified the text. In international legal structures, a statute serves as a foundational legal document; think of it as a treaty. The Court became functional after the required ratifications in the early 21st century. However, this did not happen overnight.

When I discuss any concept related to international law, I prefer not to begin directly with binding rules, as one would in a municipal system. Law students probably approach it this way: they begin with a rule and proceed accordingly. However, as students of international relations, we tend to focus more on the historical as well as philosophical background of such institutions. This is the essential difference between scholars of international relations and law. Let’s think of it as a complementary relationship when it comes to international law. The idea itself can be traced back to the mid-20th century, or even earlier. I won’t go too far back, but at the end of World War II, in response to the massive destruction and loss of life during the war, the victorious states felt that something had to be done to prevent such an atrocity from happening again. I view this reaction as a revival of morality against the law as we knew it until then; however, the way it was implemented is certainly open to criticism from both legal and political perspectives. Still, the core idea of asking what measures should be taken to prevent such a catastrophe from recurring is not a bad one. At the very least, this intention deserves to be acknowledged.

Up to that point, the system had not yet fully developed. However, if we take as a reference the Geneva process that followed the Crimean War and the American Civil War in the mid-19th century, we can speak of a historical narrative that spans roughly 100 years. During this century, despite various codifications, conventions, meetings, and well-intentioned efforts aimed at ensuring that wars were fought more humanely and at minimizing unnecessary suffering and casualties, two world wars could not be prevented. Eventually, this Western-centered system expanded to form what we now refer to as the international community, a global system encompassing every part of the world.

In the aftermath of World War II, the idea of accountability was particularly vivid due to the devastating impact of the war. We can use an earthquake and people’s reaction to it as a metaphor: people become highly alert and prepare emergency kits, only to forget about them within a few years, until the next catastrophe. The post-war atmosphere was marked by heightened awareness of preventive measures, which gradually faded over time. During this period, it became necessary to identify the party responsible for the outbreak of the war. Who committed the crime, and how was it committed? A similar situation had occurred after World War I: a culprit had to be found, and Germany was held accountable. For example, France established control mechanisms in various regions, most notably the Ruhr area, and generated revenues from these areas for reconstruction purposes. In other words, the entire German nation was punished.

For the first time after World War II, influenced by American idealism, the idea of punishing individuals rather than an entire nation emerged. This marked a significant shift. When punishment is imposed on a nation as a whole, the other parties tend to see themselves as innocent and to justify all their war efforts. None of the warring sides, today or in the past, would admit that they violated the law. Since the Renaissance and the Enlightenment, waging war in line with national interest had been framed as a rational act. As such, each party maintains its own sense of legitimacy. So, in such a context, how can war be prevented, or states deterred from engaging in it? Perhaps, if decision-makers consider the legal consequences they might face afterward, they may be dissuaded from going down that path. With this idea in mind, the concept of a legal structure that could hold individuals personally accountable began to take shape.

The idea of punishing individuals, mostly decision-makers and political figures, rather than entire nations was unprecedented until that point. It happened for the first time in this context. The victorious powers established two tribunals, Nuremberg and Tokyo, and by doing so within the territories of the countries concerned, they wanted to show the rest of the world that justice was upheld in both Germany and Japan without punishing these two nations. The Nuremberg Trials were held in Germany to prosecute German politicians deemed responsible for the horrific loss of life. High-ranking Nazi officials were tried there. Similarly, the Tokyo Trials were set up for Japan, where key Japanese decision-makers were prosecuted.

These two examples are highly controversial and have been debated ever since. Critics argue that individuals were prosecuted for crimes that had not been clearly defined at the time they were committed. According to legal principles, a prosecution requires that the rule in question must have been in force at the time of the offense, and that the accused must have been aware of it. However, in these cases, legal rules were introduced retroactively. For instance, a crime allegedly committed in 1942 was defined retrospectively under legal norms that emerged only after the war, in or after 1945. From a legal-technical standpoint, this is open to criticism. Nonetheless, these actions were broadly accepted by societies that were traumatized by the war. The important point was to identify someone responsible for the mass deaths. If we consider any act of terrorism, we know that those who have suffered loss expect some form of justice, someone to be held accountable. That is essentially what was done here, or at least, that was the intention of the winning side.

Whether this was right or wrong is not a question with a simple answer. However, the explicit decision not to punish the German and Japanese peoples as a whole, but only their top-level decision-makers, enabled these nations to be reintegrated into the international system by the very hands of their previous enemies. Their transformation into more peaceful states was also supported by the United States, which provided guarantees and had a direct role in shaping their constitutional structures; another important novelty of the post-war order.

Looking at the process from that point onward, it is not necessary to go too far back in time to observe that, for the first time in history, the post–World War II ad hoc criminal tribunals performed such a function. After that, the punishment of political leaders did not recur frequently. However, by the 1960s and 1970s, these precedents remained relevant, as the international system saw the inclusion of new members: countries that were not among the founders of the United Nations but emerged from the decolonization of the former colonial empires, especially in Africa and South Asia. This transformation significantly altered the composition of the UN. When the United Nations was established, there was a general consensus between the Security Council and the General Assembly. Although not precisely, they were loosely conceived as executive and legislative branches, respectively. However, by the 1970s, disagreements between them began to surface. The number of new states surpassed that of the founding states, and this shift was reflected in voting patterns. The gap between the Security Council and the General Assembly started to widen. Although the two organs were originally intended to function in a complementary manner, this was no longer the case after the 1970s.

Meanwhile, another UN body, the International Court of Justice (ICJ), continued to adjudicate cases. Yet debates emerged regarding its effectiveness. Why did it fail to produce substantial outcomes in preventing bloodshed even though we did not have another total war? This question helps explain why the idea of an International Criminal Court re-emerged. Although the ICJ was envisioned and idealized as a body with jurisdiction under the UN framework, in practice, it did not automatically assume authority in the event of crises or disputes. Unless the parties themselves agreed to bring a case before it, the Court could not issue binding rulings; it could only provide advisory opinions. As a result, this legal body failed to prevent bloodshed, conflicts, and the loss of millions of lives across different regions of the world. From the 1970s onward, with the increasing participation of newly independent states, we begin to observe an effort to merge two distinct ideas: the prosecution of individuals (as opposed to nations) and further institutionalization of the international legal system with a focus on equity and inclusiveness.

The legitimacy of war is a deeply rooted concept in Western legal thought, traditionally expressed through the term jus ad bellum. Although this notion has a history of nearly two thousand years, it was largely abandoned with the rise of positivist legal theory in the 18th and 19th centuries. During this period, the focus shifted from whether a war was legitimate to whether a state considered its own actions justifiable. A state’s belief in the righteousness of its own conduct became sufficient, and thus, the question of the legitimacy of war lost its significance in the West. Justifying war did not need any moral explanation other than the interest of the rational state.

However, the newly independent states that became integrated into the international system during the decolonization process, particularly those in Africa, the Middle East, and Asia, did not experience this Western historical transformation. For these states, the legitimacy of war remains a relevant issue with both moral and legal dimensions. The moral and legal justification of war is important to them not only in historical terms but also in the present day. Consequently, these states sought to evaluate the concept of jus ad bellum in conjunction with jus in bello, which governs the humanitarian conduct of warfare. In this way, the two concepts, traditionally treated separately in Western legal thought, gradually merged, prompting discussions that encompass both the legal justification and the conduct of war.

Reflecting this approach, in 1977, additional protocols were introduced to the 1949 Geneva Conventions, expanding the definition of armed conflict. Not only wars between two states but also all other forms of armed conflict were brought within the scope of international humanitarian law. This revision was particularly aimed at enhancing the representation and influence of African states within international law. However, these protocols were not ratified by many Western states. Fearing that their own military operations could be subjected to heavy criticism, these countries remained hesitant. As a result, the practical impact of these measures remained limited. Within this broader context, the idea of individually prosecuting decision-makers responsible for wars re-emerged, especially among states with high levels of welfare and low security threats, as well as among idealistic legal scholars. According to this perspective, it is no longer practical to hold states accountable, since we cannot put limits on state sovereignty; instead, the individuals involved in the decision-making process should be held accountable. This approach envisions a judicial mechanism in which reliable and independent legal experts deliver judgments without waiting for states to take action collectively.

However, it should not be overlooked that, in the past, states have been reluctant to relinquish their national sovereignty. Although the International Court of Justice (ICJ), established in the 1940s, was envisioned as a fully authorized judicial body with automatic jurisdiction, it evolved into an organ acting on the basis of consent. The ICJ cannot issue binding decisions unless both parties agree to its jurisdiction; it can only offer advisory opinions. This has prevented the establishment of an independent and automatically functioning international judicial system.

In this context, the adoption of the Rome Statute and the establishment of the International Criminal Court (ICC) marked a new phase in the development of international criminal justice and can be seen as the product of idealistic aspirations. The ICC was designed as a supranational legal body capable of delivering judgments on behalf of humanity, without requiring prior consent from states. Just as national courts render decisions “on behalf of the people,” the ICC claims to render decisions “on behalf of humanity.” However, while this structure is deemed acceptable by some states, others criticize it as a mechanism that assigns legitimacy to itself unilaterally.

As a result, the number of states that have signed and incorporated the Rome Statute into their domestic legal systems has remained limited. This very fact hinders the universality of international criminal justice. All in all, the evolution of international criminal law is indeed a by-product of the tension between idealism and the political realities.

When examining international regimes, whether legal or commercial, it would be misleading to assume that a regime becomes popular or highly functional merely because a large number of states have joined it. Take the international regime for the protection of whales, for example. It has been in place since the 1950s. Even if the majority of the world’s states sign and implement it, if major whaling nations such as Japan, Norway, and Canada do not sign or comply with it, then the regime becomes ineffective regardless of how many others have ratified it.

Similarly, the effectiveness of the International Criminal Court, which emerged with humanitarian ideals that resonate on every continent, is significantly undermined without the support and ratification of major powers. Therefore, the number of signatories is not as telling as which states have chosen not to sign or ratify the statute. This determines the practical functionality of the ICC. Accordingly, the ICC’s place within this historical trajectory, spanning at least 150 years since the early development of humanitarian law, should be assessed within that broader context. On the other hand, when we consider the post-World War II period, particularly the prosecution and punishment of individuals, albeit through temporary tribunals such as the Nuremberg and Tokyo Trials, we must acknowledge that some degree of success has been achieved. Yet it cannot be deemed an absolute success. The assessment of this situation hinges on which states deemed the statute unacceptable, chose not to sign it, or refrained from completing the ratification process even after signing.

Question 2: The applicability of international humanitarian law in armed conflicts (the examples of Ukraine and Gaza)

The concept of war is an ancient one. Although it remains a practical and widely used term, and even though the meanings of certain concepts evolve over time, we often continue to employ it metaphorically. This persistence is largely due to linguistic habits that are difficult to abandon.

By the 1940s, the first efforts to codify the rules of war began to take shape. In fact, even earlier, following World War I, we see that the term “war” was still comfortably used in international treaties. At that time, inter-state conflict followed a well-established pattern, and there was essentially no alternative: conflicts typically occurred through formally declared wars. For instance, when a Serbian nationalist assassinated the heir to the Austro-Hungarian throne, it was accepted as a legitimate casus belli; the very next day, Austria-Hungary could formally declare war on Serbia.

In today’s context, however, declaring war is no longer so straightforward. Consider, for example, incidents in which India strikes certain areas in response to terrorist attacks: did Pakistan immediately declare war on India? It did not.

Beginning in the 1930s, steps were taken to prevent war from being used as an instrument of national policy. The origins of this shift can be traced to the Briand–Kellogg Pact. From that point onward, restrictions on the use of war by states became increasingly prominent. Ultimately, however, the decision still lay with states themselves: Should we retain the right to wage war or not? There is no overarching authority that can tell states, “You may not go to war.” There is no legal structure equivalent to a Pax Romana that could dictate, “You will only go to war if I declare.”

By the 1930s, therefore, the concept of war was being gradually sidelined in legal and political discourse. The prevailing attitude among states seemed to be: Peace is desirable, of course, but let us retain at least a minimal right to resort to war, just in case. No state has ever declared war since then, yet we observe belligerence everywhere.

The notion that war should not be used as a tool for pursuing national interests is closely linked to the concept of aggression, that is, wars should not be waged for purposes such as territorial expansion. However, if a state is attacked, it must retain the right to respond. In this sense, aggression is deemed illegitimate, while self-defense is considered justifiable. The reasoning is straightforward: I do not wish to be assaulted, but if I am, I will defend myself. The aggressor, not the respondent, should be deemed culpable. This idea lies at the heart of the principle of jus ad bellum.

Just before the outbreak of the Second World War, the Spanish Civil War took place. This conflict functioned almost as a rehearsal or simulation for World War II. The major powers were already inclined toward warfare, and Spain became a testing ground for these ambitions. Countries such as Germany, the United Kingdom, and Italy supported opposing sides, both materially and rhetorically, yet avoided direct combat with one another. What emerged, therefore, was an indirect war.

At the time, there was no established concept of civil war. Although Spain was engaged in an internal conflict, in practice, Germany and the United Kingdom were clashing indirectly on Spanish soil. Yet no formal term existed to describe this, since there was no declared war between two sovereign states.

After the Second World War, aggression and threats of force were categorically prohibited under the United Nations Charter, and all member states agreed to this ban. But does prohibiting threat and aggression make war impossible? Not exactly. Instead of declaring war, states began to employ their military capabilities in alternative ways. This led to the rise of undeclared wars. Since the word war is now largely prohibited in legal discourse, no normative framework can be built around it; consequently, such confrontations are referred to as armed conflicts.

How many types of armed conflict are there? As illustrated by the example of the Spanish Civil War, two main categories are often distinguished:

  1. Armed conflict between two parties that are signatories to international treaties, in which case humanitarian norms must be respected.
  2. Armed conflict involving a state and other actors, which is effectively a form of civil war.

The 1949 Geneva Conventions can be considered the ABCs of waging war in a humanitarian manner. However, by the 1970s, the wars of decolonization in Africa began to produce conflicts that did not fit neatly into either of these two categories.

Take Nigeria, for example. At the time, it was not yet fully independent but was in a transitional phase in which independence from the United Kingdom was anticipated. Within this context, internal violence erupted. This raised the question: should it be classified as an internal disturbance within the British Empire or as an international armed conflict?

If it were to be deemed international, who would be the opposing signatory party to the United Kingdom? There was none; Nigeria was not yet a member of the United Nations. From the African perspective, this was a war; from the British perspective, it was an internal matter. The British position was: This is an event occurring within my empire. It is not an armed conflict but a law enforcement issue. To draw an analogy: if an armed group were to clash with the police on the streets of Turkey, would we call that an armed conflict?

Even today, we see attempts to frame certain conflicts as international in order to bring them under the scope of international humanitarian law. However, the state where the conflict occurs may insist: This is an internal security matter; it does not fall under the Geneva Conventions. This reflects a longstanding and fundamental disagreement, one that persisted well into the 1977 Additional Protocols and beyond. Determining which conflicts fall under which provisions of the Geneva Conventions remains highly complex and contested.

Consider the case of Syria: is it a purely domestic issue, or does it involve foreign states? Is it a matter of internal security, or should it be seen as a conflict managed by the Syrian state within its sovereign territory? In reality, the lines have become increasingly blurred. In a context where states engage in both direct and indirect conflicts, we encounter the concept of proxy wars. These multifaceted, multi-actor conflict scenarios may appear to be civil wars on the surface but, in fact, reflect broader and more complex patterns of confrontation.

In such cases, the norms of humanitarian law established by the Geneva Conventions are often not fully applied. This is largely because a significant number of the signatory states are directly or indirectly involved in these conflicts. It is unrealistic to expect parties engaged in active hostilities to enforce legal norms objectively. Each side tends to view itself as innocent while blaming the other, or it obstructs accountability processes to conceal its own complicity.

Moreover, identifying violations is not easy; it requires genuine verification. Reports and visual evidence sometimes suggest the use of chemical weapons: victims dying by burning, discoloration of the skin, and so forth. Women, children, and the elderly are often killed in large numbers, sometimes by the hundreds or thousands. It is evident that such deaths are unlikely to result solely from conventional weapons. Yet these claims often remain unproven, as all parties blame one another and no impartial investigation is conducted.

A similar pattern can be observed in Gaza. Attacks justified on the basis of self-defense frequently push the boundaries of humanitarian law. According to Israel’s official narrative, the goal is to neutralize terrorists. Civilians are told: Separate yourselves from the terrorists so that we do not harm you. If civilians fail to leave, two assumptions are made: either they support the terrorists and are therefore considered combatants, or they are being used as human shields against their will. As a result, civilian casualties are rationalized as collateral damage.

A comparable legitimation strategy is evident in the Russia–Ukraine conflict. According to Russia, this is not an act of aggression but a special military operation. Given Russia’s veto power in the UN Security Council, it is nearly impossible to impose international legal sanctions, revealing the structural limitations of the UN system, particularly when a veto-holding state is directly involved. Russia justifies its actions by claiming to combat Nazi movements within Ukraine. Since Nazism represents one of the gravest crimes against humanity, falling under universal jurisdiction, Russia presents its actions as a form of universal punishment consistent with this principle. According to this logic, the world should support Russia’s efforts, but since it does not, Russia argues that it bears the burden alone and therefore cannot be held accountable.

Returning to Israel, the dominant discourse states: We were attacked, and we are merely responding. From Israel’s perspective, the period before October 7 was relatively peaceful. Hamas, on the other hand, claims that the original aggression dates back to 1948, with the founding of the State of Israel. Thus, Hamas justifies its struggle through the principle of self-determination, one of the most fundamental human rights recognized in the 1948 Universal Declaration of Human Rights.

Unfortunately, the debate over jus ad bellum, that is, the legitimacy of going to war, continues to this day. The unresolved nature of this debate remains one of the greatest obstacles to the effective implementation of humanitarian law, the minimum standard of humane conduct that must be upheld even during armed conflict. Parties to war claim moral superiority by asserting that theirs is a just war. This, in turn, produces a system in which the rule of law is undermined and the narrative of the more powerful party becomes legitimized.

Actors who consider themselves morally superior tend to dehumanize their opponents, often portraying them as “subhuman.” This rhetoric is explicitly echoed by some Israeli officials, who claim that the people of Gaza do not truly suffer. As a result, conceptual boundaries become blurred. Within Israel’s “just war” narrative, civilian casualties are indirectly blamed for obstructing the war effort, either because they are assumed to serve as human shields or because they are believed to protect combatants. Consequently, these civilians are portrayed as partially culpable for the continuation of the conflict.

Question 3: The status of non-state armed actors under international law

The concept of war, specifically, the capacity of states to wage war, was codified in international law throughout the period encompassing the World Wars, the interwar years, and the immediate post-war era. Consequently, the legal and conceptual frameworks of that time largely revolved around the notion of war as an activity conducted between states, since the idea of “non-state armed actors” did not yet exist within the prevailing discourse.

However, with the process of decolonization, particularly during the 1960s and 1970s, such actors began to emerge for the first time. The term “guerrilla movement,” referring to non-state armed groups, was coined to grant these actors a form of status and to conceptualize a new category within the international security framework. Although “guerrilla” is a Spanish term, it was widely adopted by these groups themselves.

The core issue concerning these groups stemmed from the challenges they posed to states, especially colonial empires, where a notable convergence of interests between the United Kingdom and the United States could be observed. During ongoing United Nations deliberations, it became evident that Roosevelt and Churchill had reached a consensus in their Atlantic discussions that colonialism could no longer persist in its existing form, namely through anti-status arrangements. Although initially excluded due to its occupation, France was later persuaded to accept this position. Thus, the two most significant European colonial powers were compelled to relinquish their empires.

However, this process of relinquishment needed to avoid triggering chaos; it had to be orderly and controlled. For example, decolonization began in a place where many believed “Britain would never relinquish control”: India, which gained independence in 1947. From the mid-1950s onward, independence movements rapidly spread across Africa. Ghana became the first African country to achieve independence from the United Kingdom.

The fundamental problem facing Africa was this: how could the continent avoid descending into unmanageable chaos during the independence movements? It was decided that the existing administrative boundaries would be recognized as state borders, and that these lines were not to be altered. This decision was based on the understanding that the borders had not been drawn according to linguistic, religious, or ethnic unity. At that time, the territories designated by colonial powers existed as administrative units, but there was no assumption that, for example, “those who speak this language should belong to this state.”

If a European model of statehood, based on ethnicity or language, were applied, the resulting situation would be impossible to manage. Such an approach would lead to widespread conflict, the dissolution of borders, and catastrophic bloodshed. Therefore, Africans were told: You will be granted independence, but do not alter the borders that were once drawn for administrative purposes. On the surface, this was accepted, but in practice, it led to significant conflicts in certain regions, such as in the formation of Nigeria and the demarcation of the Cameroon border. Examining African borders today reveals how arbitrary many of them are.

Armed groups, inspired by the 1948 Universal Declaration of Human Rights, took up arms to establish their own nation-states. At this point, a fundamental dilemma emerged: which norm should take precedence, the right of peoples to self-determination, or the authority of existing states to suppress armed uprisings within their territories? These two norms collided, producing a state of relative chaos across Africa during the 1970s.

These armed groups defined themselves as the armies of nascent, yet unrecognized, states. European powers, however, regarded them merely as armed entities not representing any legitimate state. Since sovereignty had not yet been conferred upon them, international law struggled to categorize their status. Instruments such as the 1949 Geneva Conventions proved insufficient to address such cases. This legal ambiguity became increasingly evident during the 1970s as more examples appeared.

Another illustrative case is the American intervention in Vietnam. Against whom was the United States fighting? Was it supporting Vietnam as a whole or opposing an unrecognized state entity? The U.S. referred to its adversaries as North Vietnam, yet according to the Viet Cong, North Vietnam did not exist, the entirety of Vietnam belonged to them. From their perspective, the U.S. was the occupier, and those collaborating with it in the south were traitors. Which narrative reflects reality? Both coexisted simultaneously, depending on perspective. A similar parallel can be drawn with Palestine: is the legitimate entity the Palestinian Liberation Movement (and its faction, Hamas), or the state of Israel? The answer depends largely on one’s standpoint.

The international community has yet to reach a consensus on how to identify the legitimate parties in such armed conflicts. This is largely because one side is often supported by one of the five permanent members of the UN Security Council, each possessing veto power. If a movement is leftist and anti-American, it is unlikely to be recognized as legitimate by the United States, which may deem a military response necessary. Conversely, the Soviet Union often adopted the opposite stance. Given that the Cold War era was characterized by deep ideological divisions, recognizing non-state armed actors, granting them legal status, or applying international humanitarian law to them was largely impossible.

To apply humanitarian law to an actor, that actor must first be recognized as a party to the conflict. However, when an actor lacks formal recognition, its actions fall into a legal vacuum. It is often said that terrorists are not to be negotiated with, a principle that is correct in theory. Yet if a terrorist group says, “We are laying down our arms,” can one respond, “No, you may not; you do not exist”? Would this not imply a form of recognition and engagement? This was a major dilemma during the Cold War.

Following the end of the Cold War, the situation evolved. Non-state armed groups began to be interpreted more frequently as self-determination movements. States without constitutional crises or fears of territorial fragmentation adopted a more accommodating attitude toward granting limited legal recognition to these groups. However, this approach created serious complications for other states facing separatist movements of their own.

In the post-9/11 era, one side increasingly invoked the concept of terrorism, while the other tended to view such movements as representatives of oppressed peoples. This dichotomy further complicated the applicability of international humanitarian law. Broadly speaking, this situation can be analyzed in two phases:

  1. The pre-1990 Cold War period, marked by ideological bipolarity and the inability to clearly define the parties to armed conflicts arising from decolonization; and
  2. The post-Cold War era, particularly after the decline of one global pole, when similar movements came to be seen more often as expressions of human rights and the principle of self-determination.

This interview was conducted by Uğur Can Özkan & Esma Akçiçek.

