The Systemic Problems of Contemporary Academia
Since the beginning of the Emancipation as Navigation Summer school, I have had numerous discussions with people about the state of contemporary philosophy, and the state of contemporary academia more generally. Some of my thoughts on the matter are expressed in the posts on the Transmodern Philosophy blog accompanying the Summer school, and others were expressed during the first public panel. I’ve had numerous questions put to me about the perspective out of which these thoughts were developed, as people have rightly surmised that there’s a certain systematic account of academia underlying them, but this is an account that I’ve never actually published in any public forum. I did begin writing something on this topic just over two years ago, an essay somewhat ambitiously titled ‘The Systemic Problems of Contemporary Academia and their Solution’, but, although I was quite happy with my analysis of the problems, it turned out to be much harder to articulate their solutions (somewhat unsurprisingly). This isn’t to say that I didn’t (or don’t) have some ideas about this, but rather that the amount of effort required to seriously think them through within the framework I’d laid out was too great to justify spending the time on it (ironically, for reasons well explained in the problems section). Despite some abortive attempts to rework the material with the brilliant Fabio Gironi, I haven’t done anything with the portion of the essay that was completed. It seems to me that now is as good a time as any to put it out here, to give some background to the things I’ve said elsewhere, and to encourage some more discussion about the predicament we philosophers and academics find ourselves in.
After listening to and taking part in a number of discussions regarding the problems of current refereeing practices and related issues with philosophers over at the NewAPPS blog, certain ideas that have been floating around in my brain for some time now, following on from numerous similar conversations with other academics, have started to coalesce. It was initially my intention to address only one issue, and to propose a tentative solution to it, but the more I worked on presenting it, the more I was drawn into the general description of the problems of academia as a whole. I initially resisted this, but ultimately gave in and decided to approach the problem holistically. This essay is the result. I will get into the details shortly, but I will preface my discussion of them by announcing a general frustration, which I share with a number of my as-yet-to-be-established colleagues. This is a frustration with most established academics’ unwillingness to discuss solutions to these sorts of systemic problems of contemporary academia. They’re often willing to bemoan the problems at length, but will shut down any even remotely speculative discourse on what we might do about it without a second thought. I do not think this a malicious gesture, but it is certainly a misguided one. As such, I’d like to begin by pre-emptively defusing the means of such discursive closure.
A lot of these issues are not merely theoretical for me, but seriously existential. For instance, as an unaffiliated early career academic (a fancy way of saying unemployed with a PhD), I feel a serious pressure to drop work on what I consider to be my better and more important ideas because they are either risky, unpopular, or both, and to find the most minimal contributions to existing debates I can make and turn them into a series of safe and entirely trivial papers (I’m not supposed to even think about writing my monograph…). There are obviously practical options in between these two extremes (unconstrained intellectual freedom and unmitigated intellectual mediocrity), and I don’t expect there to be an easy option (what I’ve previously called the no-pain-no-game principle). However, the apparent range of choices is remarkably slight, and any attempts to find new possibilities consume the very cognitive resources I need to pursue them. It’s a delicate situation that I’d rather not be in. I want to think, but I also want to live. My worry is that my desire to keep on doing the latter will compromise my ability to do the former, or that the current systems for deploying my own cognitive resources towards the larger social good are grossly inefficient. Too much playing the game and not enough work makes Pete a dull boy.
That said, although many people are willing to accept the existential aspect of these questions, they often use this very aspect to discourage theoretical engagement with them. This often amounts to a sort of one-two punch of counter-productivity: “Your abstract theoretical considerations have no serious application to concrete issues” (the pie-in-the-sky challenge) conjoined with “These concrete issues are too serious for abstract theoretical considerations” (the don’t-think-do imperative). If I received £10 every time an established academic used some variant of these arguments to tell me “Don’t rock the boat!”, I could abandon mainstream academia entirely and make a living out of talking to academics about these very issues (okay, maybe £100). I’m going to label all these various discursive tactics boat stabilisation manoeuvres.
I want to have serious conversations about the nature of contemporary academia, its problems, and what we can actually do about them, while I still have some cognitive resources going spare. All genuine solutions to practical problems involve understanding as a crucial component, and I want to cultivate this understanding. As such, whenever anyone asks me to “be realistic” from now on, I’m going to ask them precisely which modal operators (alethic, deontic, and hybrid) they’re using to define the space of ‘realistic’ possibility they’re talking about (I have a whole spiel about modal logic and practical reasoning I could give you here… but this is another example of an idea I’m not sure I have time to develop!). I may be under-informed about various material realities that constrain the space of possible action, but this does not make me naive. Let’s have real arguments here (inform me if you must!), rather than pragmatic gestures whose purpose is solely to close down such arguments.
Okay, that’s my preamble/rant done. Let’s get into the serious proposals. I’m going to provide the most concise account of the specific problems we face before attempting to pose some tentative solutions. In doing this, I’m going to try to occupy a sort of theoretical original position, in which I ignore my own theoretical commitments and thus which academic work and academic groups I consider to be doing whatever-it-is we’re doing well, with the exception of those theoretical commitments I need to analyse these problems and suggest some solutions. I can’t get rid of these commitments, unfortunately. We are all aware that the original position is not the view from nowhere. Additionally, although I think my remarks apply to quite a large swath of contemporary academia, the examples I provide will be drawn from my own discipline: philosophy. I can only apologise to any non-philosophers who stumble upon this piece for my narrowness of scope. These methodological remarks aside, stylistically, I’m going to divide problems and solutions into two (disproportionate) sections, and number the points within them to make the structure of my argument as clear as possible.
1. Let’s distinguish between two different problems that nevertheless overlap in important ways:-
i) Distribution: The principal means of distribution of academic work (e.g., print journals, publishing houses, institutionally-bound resources, etc.) have become increasingly divorced from their primary purpose. This is obvious insofar as the patently superior alternative means that have increasingly become available (e.g., electronic journals, print-on-demand, institutionally-free resources, etc.) are largely sidelined by academic practice. If our purpose is primarily to facilitate the transmission of academic work in such a way as to enable its continued production (or whatever-it-is we as academics contribute to the social good), rather than making whatever profit can be had from it (which is inevitably meagre on the part of most academics), then, all else being equal, the current systems of distribution are inferior to the alternatives that are patently possible.
ii) Assessment: The principal means through which the output of academics (and thus their quality qua academics) is assessed (journal referees, readers for publishing houses, hiring practices, etc.) have become increasingly convoluted and divorced from their primary purpose. If the purpose of these mechanisms is to facilitate a maximally fair (or if you’re my kind of traditionalist: just) distribution of the limited resources our societies grant us to do what we do (whatever-it-is), then they have become increasingly warped, insofar as they increasingly select for a set of properties (taken together as: selectivity) that bear little resemblance to our conception of what good academic work is (and thus also what good academics are).
The extensive lists of papers on a single narrow topic, displaying nothing but the most trivial additions to a series of narrow debates, that one increasingly finds on the CVs of newly minted academics are the peacock’s feather of our profession. To have such a feather is certainly not to be unfit for academic purpose, but it doesn’t seem to be systematically correlated with fitness either, in anything but the most meagre fashion. Nevertheless, we are still told to keep the boat stable by faithfully putting as big a feather in our cap as we can find.
2. As I mentioned, these issues overlap, insofar as our systems of distribution and our systems of assessment intersect in important ways (e.g., journals, publishing houses, universities). However, the conceptual knife that we must wield mercilessly in thinking about these overlaps is this: we are both the producers and consumers of the works that are distributed, and the authors and the auditors of the works that are assessed. We compose both sides of the transactions out of which each system is built, and this means that it is only us who give anyone authority to stand in between the origin and end of these transactions. Call this the principle of transactional symmetry.
To this no doubt people will respond that there are all sorts of pressures regarding the way the system is currently set up that act as obstacles to changing it. This is most certainly true. However, this fact should not of itself prevent us from effectively categorising these obstacles and working out how we overcome them. We perpetuate our own impotence to change these systems only by our unwillingness to think about the problems and attempt to solve them together. The simple structure of these mechanisms means we are in an incredibly powerful position to collectivise should we be willing. It is from within this frame of mind that we should approach all the subsequent issues. Call this the injunction for academic collectivism. To push the metaphor even further: maybe some co-ordinated boat rocking is what we need. Maybe the prisoner’s dilemma constituted by boat stabilisation manoeuvres has turned the boat into a prison ship, and if we all just move at the same time we can overturn it and swim to freedom (or at least a less oppressive boat).
3. Given this, the question of distribution is a lot less vexing than the question of assessment. The only good reason we might have to keep the current problematic systems of distribution, in favour of the patently possible alternatives is that they are somehow inextricably bound up with the effective bits of systems of assessment to which they are linked (call this the assessment rationale). However, I think it is obvious that at least some of the ineffective bits of our current systems of assessment actually stem from the way they are tied to mechanisms of distribution run by organisations whose interests are not necessarily the same as our own (whatever-they-are). To give but one example, the propagation of academic fads (which always occur to some extent) is catalysed by publishers seeking to capitalise on them, and this results in all too obvious distortions of our systems of assessment. This is not yet to mention the related issue of how these systems encourage self-sustaining academic cliques, which I’ll discuss in more detail later.
What we must bear in mind is that, collectively, we have almost all of the purchasing and labour power here. This is the upshot of transactional symmetry. Organised boycotts (such as the recent Elsevier case) can be extremely effective, though they will only become truly effective if they are broad-ranging, well organised, and accompanied by the construction of viable alternative systems of distribution and assessment. Academics are often individualists by vocation if not by nature (or even principle). We’re encouraged to some extent to sell ourselves as products, both socially and economically, and this means that we’re encouraged to differentiate ourselves from our peers, even as we’re encouraged by different forces to cluster into self-identifying groups. We’ve got to be careful not to let this individualist tendency undermine the necessary political collectivism that dealing with these issues demands.
4. Returning to the more serious issues then, let’s pose a serious question for ourselves regarding our systems of assessment: would the great academics populating the history of our own discipline (Aristotle, Hume, Kant, Hegel, Russell, Carnap, Husserl, Heidegger, etc.) be rejected by our current system for the lack of a feather in their cap (or whichever selection metaphor you prefer)?
To this challenge many will respond that they certainly would, and that this proves how healthy the system is. I don’t want to argue on a case-by-case basis here (theoretical original position, remember?), but rather point out a more general feature of the possible arguments we could have here. We all recognise that not everyone can be Aristotle, Kant, or Russell, or even Malebranche, Reinhold, or Bataille. There is a ranking of quality in the history of academic philosophy as well as in contemporary academia. Such rankings are the whole point of having systems of assessment. We all want to move as high as we can in the rankings, but we all accept that it’s not up to us where we stand (no-pain-no-game). The best we can ask for is a fair shot at hitting the big time, as it were. Call this the principle of academic meritocracy.
Once we recognise this principle, we must realise that the above challenge is one of degree not one of kind. Just how far down the historical rankings must we go to find thinkers who would be excluded from contemporary academia? This leads to the following question: when we compare the figures who would be hypothetically excluded with those who are factually included, is the comparison a fair one or not? I suggest that it is likely to be substantially unfair. I suggest that the most pathbreaking and interesting work done in the history of philosophy would be systematically discouraged and even marginalised by the current systems of assessment. Again, this is too general a claim, insofar as it says nothing about the specific content of work done in the past and the present, and so nothing about the difficult qualitative distinctions that would have to be drawn.
No doubt, different locales within the contemporary academic terrain would fare better than others, and I do not wish to name names here for the sake of my general point (original position again). It simply seems clear to me that although there are important differences between what Lakatos might identify as progressive and degenerate research programs in academic philosophy and elsewhere, the current systems of assessment side with ossification against dynamism. Much as in the economy at large, the principle of meritocracy in the academic world has been picked at and undermined, to the point where academic social mobility has been seriously decreased. The point at which you can’t have interesting new thinkers coming out of left field and revolutionising aspects of different parts of your discipline is the point at which it has become systemically degenerative.
5. Let me tie some of these ideas together by combining two different nautical metaphors: my own idea of boat stabilisation manoeuvres and Neurath’s eponymous boat. Although we’re all individualists to some extent (see the tendency discussed above), we all have to recognise that academia is the socio-epistemic system par excellence. In the methodologically impoverished form in which we’ve been discussing it, it’s clear that academia is a collective enterprise. This is the whole point of addressing our problems in terms of (loosely symmetric) transactions between academics. If you don’t think academia is collective, then you reject the very idea that there are problems here, insofar as these are obviously problems of collective organisation. Given this, I’d like to suggest that instead of talking about Neurath’s boat, we talk about Neurath’s fleet (or Hegel’s armada, if you prefer).
The reason for this is that we’re not just dealing with one boat, but with a whole load of different ones corresponding to particular institutions (e.g., universities, journals, publishers, etc.), traditions (e.g., ‘analytic’ and ‘continental’ philosophy), sub-disciplines (e.g., philosophical logic, applied ethics, philosophy of social science, etc.), research programs (e.g., possible world semantics, (Husserlian) phenomenology, (Lewisian) metaphysics, etc.), debates, interests, or whichever social units you want to use in explaining socio-epistemic activity (including all of the above). Even if we can truly say there is a single fleet (rather than separate ‘analytic’ and ‘continental’ fleets, that occasionally exchange prisoners and cannon fire), the ships within it are of different sizes and internal organisations, and they’re related to one another in complex ways (alliances, rivalries, trading relationships, etc.). There’s a lot of structure here, most of which I simply can’t capture without violating my own methodological restrictions. However, the reason Neurath is the admiral of this fleet is that each boat is constantly being rebuilt by its occupants (and the occasional contractor hired in from another ship). Academic progress is at its core a matter of challenge and revision. Some parts of our various boats are ancient and reliable, and others are brand spanking new, but none of them is in principle exempt from the process of communal redesign (with perhaps a few logico-mathematical exceptions, but even this is controversial).
In an ideal world, each boat would be built by an ad hoc committee of shipbuilders who were collectively self-determined in the most perfect way. There would be no messy factors confining individuals to any given boat (economic, political, historical, etc.), but simply a pure academic free-for-all. This is not the case, because institutions aren’t constructed in vacuums (architectural spinning-in-the-void), but are always attempts to make the best of the available constraints. Just to name the big constraint that has already been mentioned here: we have limited resources given over to academia by our societies, and this confronts us with the problem of how to apportion them fairly (see academic meritocracy). Each of us is socialised within a pre-given set of nautical institutions (I was raised mainly on the good ship ‘continental’, despite a few tours of other vessels), but we’re not confined to them.
Of course, if one doesn’t like one’s boat, it is possible to jump ship (assuming another one is passing by and will let you on board), take charge and change things from within the existing institutional hierarchies (assuming there’s a good naval career path to officer rank), or even to build a new boat (assuming you’ve got the resources and some peers willing to pitch in). However, these don’t always work. Sometimes mutiny is necessary (ask Heidegger about Husserl and the neo-Kantians) or even outright revolution (ask Russell and Moore about British Idealism). There are lots of forms of social-epistemic upheaval, and it’s crucial to recognise that some of them involve boat rocking. Even if we think those that don’t rock their respective boats are preferable to those that do, we’ve got to recognise that there are some cases in which rocking, and sometimes overturning, our various institutional boats is a good thing. If we find ourselves on prison ships, it’s best to get militant and rock out. The fleet can only make steady progress if it is willing to restructure itself to overcome social pathologies that propagate in its organisational space. Pretending that all is well when the beams of several crucial trawlers are rotting away, many of the bridge officers are drunk, and nepotism has spread like a rash across different crews is, if not a recipe for disaster, then a strategy for mediocrity.
6. Okay, enough with the elaborate metaphors for now. Suffice it to say that boat rocking is sometimes necessary, and this is why boat stabilisation manoeuvres are sometimes counter-productive. The real question is why I think our systems of assessment are systemically degenerative, and why refusing to tackle this degeneracy head on amounts to a strategy for mediocrity. I think there is more to be said here than I could reasonably say, but I will try to pinpoint a few crucial problems:-
i) The Quantitative Death Spiral: The increasing emphasis on quantity of research over quality of research is often talked about under the cultural heading of ‘publish or perish’, but I think this name, although apt to some extent, masks the really dangerous aspect of this social dynamic: it is a positive feedback loop. The issue is that there is an increasing proliferation of quantitative metrics involved at the various points in the assessment process. This starts with the internal assessment systems which make the final decisions regarding who gets a share of the finite resources required to produce research (principally jobs and other forms of direct funding). However, it increasingly works its way upstream to the various external assessment systems which make intermediate decisions that inform the final decisions (e.g., journals, publishers, academic referees, etc.).
My thesis is that this is an information-theoretic problem. There are limited cognitive resources to sort through all the job applications and funding proposals that are received by the internal assessment systems that make the final decisions on non-cognitive resource allocation. This necessitates the creation of informational filters that can reduce information input to manageable levels, or, put another way, which turn intractable practical problem solving tasks into tractable ones. It is a universal problem: problem solving requires filtration, so that it only deals with relevant information. This is the sole reason for the existence of external assessment systems that live informationally upstream from the internal ones. The purpose of their existence is to efficiently allocate cognitive resources to the task of allocating non-cognitive ones. The more claims there are upon these resources (e.g., the more PhD graduates there are looking for jobs and funding), the more important the problem of efficiently allocating our cognitive resources by setting up effective informational filtering systems becomes. This gives us a primary cause (though certainly not the only cause) of shifts in academic assessment mechanisms: at some point we succeed in producing more capable graduates than the systems for allocating resources to them can cope with, necessitating reconfiguration of those systems.
I think we could probably find various examples of such transitions in assessment mechanisms in the history of academia, some more extreme than others. I’m not well informed enough to discuss them in detail, nor do I have the time (limited cognitive resources!). What I want to suggest is just that the proliferation of quantitative metrics of assessment in both internal assessment systems (e.g., the involvement of qualitatively-blind stages (such as HR departments) within hiring processes) and in the external assessment systems that interface with them (e.g., the involvement of qualitatively-impoverished stages (such as networks of referees with overly narrow expertise/interests) within publishing systems) is to some extent driven by increasing quantity of inputs.
Once we recognise this fact, it’s not too hard to see the positive feedback loop. The more assessment systems are focused upon quantitative metrics, the more those competing for available resources will change their behaviour to meet those metrics. This means prioritising the quantity of their output. Increased quantity of input thus drives yet further increases in quantity of input, via quantitative assessment mechanisms. This is what I’m calling the quantitative death spiral. If you aren’t conscious of this dynamic and actively aim to dampen it, then you can actually drive the proliferation of quantitative metrics inadvertently. What this does is make it much easier for the properties the selection mechanism is supposed to select for (academic quality) to diverge from the properties it actually selects for (academic selectivity). If you aren’t aware of the dangers posed by the ‘publish or perish’ culture, then by advocating the current state of affairs, you implicitly encourage a much worse one.
We’re all trapped in a race to the bottom whose end point is the complete divorce of academic quality from academic success. This is a prisoner’s dilemma if there ever was one. The final point to make about this is that the nature of the death spiral is not always apparent while the non-cognitive and cognitive resources available to us are increasing. Increases in these resources dampen the feedback loop (in different ways). However, we are all aware of what the current economic situation has done and is continuing to do to our available resources. This catalyses the death spiral, and makes it ever more imperative that we do something about it right now.
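Since the death spiral is, at bottom, just a positive feedback loop, it can even be caricatured in a few lines of code. What follows is a toy simulation of my own devising (a sketch with made-up parameters, not a model of any actual hiring process): each candidate splits a fixed budget of effort between quantity of output and quality of output, the assessment filter selects purely on the quantitative metric, and each new cohort imitates the strategies of those who were selected.

```python
import random

random.seed(0)

POP, SELECT, ROUNDS = 200, 20, 15

# Each candidate splits a fixed effort budget: e goes to quantity
# of output, (1 - e) goes to quality of output.
population = [random.random() for _ in range(POP)]

for _ in range(ROUNDS):
    # The assessment filter ranks purely on the quantitative metric
    # (plus a little noise) and keeps the top SELECT candidates.
    ranked = sorted(population,
                    key=lambda e: e + random.gauss(0, 0.05),
                    reverse=True)
    winners = ranked[:SELECT]
    # The next cohort imitates the winners' strategies (with small
    # variation), since meeting the metric is what secured resources.
    population = [min(1.0, max(0.0, random.choice(winners) + random.gauss(0, 0.02)))
                  for _ in range(POP)]

mean_quantity_effort = sum(population) / POP
mean_quality = 1.0 - mean_quantity_effort
print(f"effort spent on quantity: {mean_quantity_effort:.2f}")
print(f"average quality of output: {mean_quality:.2f}")
```

Run it and the effort devoted to quantity drifts towards its maximum while the quality of the work produced collapses, without any individual ever acting irrationally: the prisoner’s dilemma structure in miniature.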
ii) Network Crystallisation: There are a number of words that get thrown around to describe the negative social dynamics that emerge in academia: ‘systemic bias’, ‘arrogance’, ‘nepotism’, etc. I think there are obviously some specific cases in which these terms are applicable, but not all cases can be understood in terms of some, let alone all of them. I prefer to talk about academic cliques, which are self-sustaining networks of individuals involved in assessing the work of others whose persistence can potentially come apart from their usefulness. This is to say, more or less static social networks underlying our systems of assessment whose existence is to some extent independent of whether they’re good at selecting for academic quality. This is a pretty abstract definition, but it’s supposed to capture a whole bunch of phenomena, some of which are intentional (e.g., active nepotism within particular groups) and some of which aren’t (e.g., passive systemic bias against particular groups).
The point of defining academic cliques in this way is to define them in terms amenable to my description of Neurath’s fleet above. Sometimes, interpersonal relations between individuals can facilitate good academic practice, and sometimes they can hinder it, and when they hinder it significantly they can warrant boat-rocking strategies of various kinds. It is an assumption of my argument that the process of challenge and revision is at the very heart of academia (and human knowledge more generally). Given this assumption, it should be obvious that when stasis creeps into the networks of assessors we use to discriminate quality of academic work and the procedures through which they do so, this can be counterproductive to the process of challenge and revision which drives academic progress. Let’s call the gradual hardening of the established systems of assessment so as to resist modification (be it good or bad): network rigidification. This rigidification is counterproductive when it serves to isolate ideas, practices, and institutions from effective challenge and revision without warrant. Let’s call such degenerative rigidification: network crystallisation.
What does the warrant that distinguishes good rigidification from bad crystallisation look like? Surely some things should be harder to challenge and revise than others? This is most certainly true. If our systems of assessment had no persistent social structure then we could not leverage the mutual recognition of quality implicit within extant social networks to assess the quality of new additions to these networks (or new output on the part of existing nodes) in reliable ways. If we did not have certain core practices for educating and socialising new members into these networks, and the loose collections of overlapping shared ideas that these facilitate, then the networks would themselves be impossible. For example, if we agree that it is imperative that philosopher x (e.g., Plato, Kant, Frege, etc.) must be taught in an undergraduate course, because being able to understand the dialectical terrain which emerges out of their work, address oneself to their core ideas, and communicate with others in the terms they provide is an essential element of philosophical training, then this implies not only certain shared educational practices, but also a loose network of authorities on x who can effectively monitor the overall state of work and teaching. This is all pretty abstract, but it should help make the basic point.
If we abstract from the actual content of the relevant ideas, and the disagreements we might have about them (original position again), we find that practices and institutions play important functional roles in maintaining the overall academic process of challenge and revision through which our knowledge collectively improves. We can argue about whether any given practice or institution is functioning well on the basis of specific facts about the area it is in, and here we have to let down the original position to some extent. However, we can say that, in general, because the function of such systems is to facilitate the challenge and revision of ideas, their insulation from challenge and revision qua practice/institution is unwarranted if it actively hampers this process, i.e., if the limited stasis (rigidity) it introduces does not yield dividends in different forms of dynamism. Returning to the example, the point of having the loose network of authorities on x should be to breed better authorities who can potentially overturn the interpretations they were raised on. This means that any network which is resistant to challenges beyond the level required to facilitate good challenges has crystallised.
What makes this a systemic issue then? The increasing drive towards unification and standardisation of assessment systems is most certainly a form of rigidification. As I’ve been trying to argue, this is not necessarily a bad thing in itself. Unification and standardisation can breed explicitness in assessment, and explicitness is certainly a virtue. The question is to what extent this increasing interpenetration and centralisation of assessment systems has encouraged local forms of crystallisation. Academic cliques are an inevitable danger of the nature of academia. They are not new, and we should not pretend that we are able to get rid of them (it’s very much a baby/bathwater situation). However, one must be able to ask whether the systems of assessment we have (and the trends in their development) are encouraging fewer or more academic cliques to form, and whether these cliques are better or worse, in terms of the distortive effects they have upon quality assessment.
I do not think that all aspects of our assessment mechanisms and the trends in their development are degenerative. Not everything is crystallising. But it is happening nonetheless. The more it happens, the more one is faced with the challenge of just who one writes one’s work for. Call this the tyranny of the audience. One must always bear one’s readers in mind when producing work. It is simply good practice. One is always writing for someone, and one must do one’s best to make one’s work accessible and interesting to them, especially when they’re going out of their way to read it for the purpose of assessing the quality of one’s ideas (no-pain-no-game). However, there is a point beyond which the amount of effort it takes to select which journals/publishers/etc. might be sympathetic to one’s ideas, and to work out just how to present these ideas so as to get through the information filters they’ve set up and the biases of the networks that underlie them, starts to seriously impact upon the quality of the work one is producing. When the amount of effort it takes to produce a piece of work and get it assessed (i.e., published) goes beyond a certain point, one is actively encouraged to take fewer and fewer risks in the work one does. Discouraging authors from taking risks (be they substantive, stylistic, etc.) unless they think they can succeed can be good up to a point. Some risks aren’t worth taking. However, beyond this point one simply encourages authors to stop taking worthwhile risks.
To summarise: network crystallisation leads to the tyranny of the audience, which in turn leads to mandatory dullness. Crystallisation makes it harder for non-established positions to establish themselves, because it makes the amount of work an author has to do to publicly defend their position not merely proportionally greater in relation to how unusual or bold it is (this is expected: no-pain-no-game) but exponentially greater. The risks one takes in writing something unusual, bold, or both, are not merely ‘academic’ risks, but ‘professional’ ones. One has limited time and resources to work on publications that one desperately needs to secure one’s means of subsistence, so one is under serious pressure to play it safe. The harsh truth is this: the less established an academic is, the less interesting the work they are encouraged to do. Once we remember that challenge and revision are the core of the whole academic enterprise, we can see that network crystallisation actually leads to a reduction in the overall quality of work, insofar as the conceptual roads less travelled become increasingly barred to travel.
The final point under this heading is that network crystallisation interacts with the quantitative death spiral in a particularly degenerative way. There are at least three ways in which this happens: a) quality dilution: the increased emphasis on quantitative output reduces the cognitive resources authors have available to do anything but play it safe; the more they have to write, the less interesting what they write often ends up being, through no fault of their own; b) assessment capture: the more dependent internal assessment mechanisms become on external assessment mechanisms, the easier it is for their decisions to become captured by dysfunctional cliques underlying these mechanisms (e.g., taking information without filtering it for bias, etc.); c) crystalline complicity: boat stabilisation manoeuvres and mandatory dullness reinforce one another so as to often become indistinguishable; we are encouraged to labour under the crystallisation of assessment networks for the safety of our means of subsistence, only to pass on a potentially worse situation to the generation who comes after us, with the promise that we too will be able to tell them: “It’s your turn now!”.
iii) Fashion Unbound: If we can understand network crystallisation as the problem of stasis within assessment systems, then we must also recognise the complementary problem of dynamism outside of such systems. If you systematically exclude certain kinds of challenge to established ideas/practices/institutions from within the systems of assessment, then these challenges end up routing around the systems of assessment entirely, and in doing so they often pick up momentum as others with similar concerns jump on board, becoming full-fledged fashions in the intra-academic spaces between disciplines and institutions. This is not necessarily a bad thing. As I’ve already mentioned, sometimes radical boat rocking of various kinds is necessary. However, when these fashions are entirely unbound from systems of assessment they are subject to a number of potential dangers.
Just as academic production tied to systems of assessment runs the risk of academic cliques, so academic production unbound by such systems of assessment runs the risk of academic fads. If the former are social networks whose persistence pulls apart from their success in assessing quality, then the latter are ideas/practices/institutions whose persistence pulls apart from their success in producing quality. These fads can suck up resources that otherwise would be put to better use, insofar as they develop their own ways of tapping into resource allocation systems. This is one point at which issues of assessment are crucially connected to issues of distribution. Distribution systems often care less for quality than for popularity, and they can inadvertently end up legitimating fashions by attempting to capitalise on them. For example, if publishers make a business decision to plough their resources into publishing books to fill market niche X, this removes those resources from publishing books that don’t currently have an obvious market niche. This can attract academics who themselves wish to capitalise on the fad to legitimate their work, in order to compete for other resources themselves.
These sorts of dynamics often lead to concept bubbles (analogous to asset bubbles), in which the popularity of an idea drives its own popularity well beyond its worth, until the point at which the whole thing becomes unsustainable and the bubble bursts. However, just as with financial bubbles, there are always those who cashed out their credibility in time and now have a stable and sustainable share of the academic pie. Once someone has ridden the wave of a concept bubble in order to bypass standard assessment systems and get themselves on the academic ladder, they’re then free to pull up others after them, by becoming part of or developing their own social networks and associated assessment systems. This indicates the extent to which academic cliques and academic fads are not opposed to one another. They’re often just different phases in the life-cycle of the same social dynamic.
As I’ve already mentioned, intellectual movements that are incubated and propagated outside of standard academic channels are not necessarily bad. They’re often absolutely essential. Institutional stagnation often demands a drastic response (let’s remember why Aristotle began the Lyceum). However, there are dangers associated with the very independence that such intellectual movements are often legitimately lauded for (e.g., autodidacticism often breeds refreshing originality and hopeless confusion in equal measure). When some ideas/practices/institutions become unduly resistant to challenge, this can have the negative effect of making all challenges to them equal in the eyes of those who are disenfranchised by the reigning orthodoxy. This means that well-formulated positions that would, undue resistance to them aside, play a worthwhile role within the march of academic progress end up in dubious alliances with challenges that would not. This is doubly negative, insofar as these alliances often prevent the worthwhile ideas from developing properly (e.g., they remain in dialogue with positions that are not worthwhile dialogue partners), and insofar as their very association with more dubious ideas reinforces the orthodoxy’s resistance to them.
Here, then, is the moral of this particular story. It is all too easy to see the prevalence of unbound academic fashions as a sort of confirmation of how much better things are within academic networks regulated by systems of assessment. However, this attitude can be prematurely self-congratulatory. It would always have been better for those worthwhile elements of an academic fashion to have been channelled through systems of assessment and thereby directly incorporated into the march of academic progress. The more worthwhile elements are excluded by such systems and forced to wander in the unstructured academic wilderness, the worse off academia is. Obviously, such prodigal ideas often return to the mainstream to become mighty pillars of thought, but this is no reason to encourage prodigality. Premature self-congratulation in this context amounts to taking a sign of the failure of one’s assessment mechanisms to integrate worthwhile ideas as a sign of their high standards. To say much more here would be to move beyond the sociology and into the psychology of academic cliques, and so I’ll leave it here.
7. To summarise, I think that the internal problems of contemporary academia (as opposed to, say, external problems of how academia is funded) are matters of the systems of distribution and systems of assessment of academic work, though these overlap insofar as the respective systems are often intertwined. In discussing these, I have focused on the problems of assessment, pinpointing and outlining three distinct yet interlocking issues: the quantitative death spiral, network crystallisation, and fashion unbound. Moreover, I have posited that, because of the transactional symmetry underlying both distribution and assessment, we are in an unusually strong position to solve these problems through collectivisation. These solutions must involve nothing less than a serious and thoughtful overhaul of the various social structures through which our systems of distribution and assessment are constituted.