The 2018 Academic Freedom Survey Results

After numerous academics made the news in 2016 and 2017 for saying or writing things that got them in trouble, we became curious about how much pressure academics felt, and from where that pressure came.  So we asked them.


     Why are academics under fire for their ideas so often recently?  Why isn't someone standing up for them and their academic freedom?  Why don't they defend each other?  What is everyone so afraid of? Is this really happening at all?


     These are the questions we asked ourselves while recording a podcast in 2017.  We had been discussing what was happening to professors like Jordan B. Peterson and Bret Weinstein. They seemed to mean well, but each said something that some people disagreed with, and now they feared for their jobs and turned to the public for support as their institutions virtually (or in Dr. Weinstein's case, literally) abandoned them.  We wondered, researched, and asked our friends: where is this pressure to be quiet and compliant coming from, and is it all overblown and exaggerated by the news media? We got a mix of answers, but none were very reliable.

     So we just asked the academics.

     And they answered.  

     We gathered a list of 8500 publicly listed email addresses of academic staff at major universities across Canada, and we asked them our burning questions.  Here's what they said.


Executive Summary

     Professors and other academics from across Canada were asked anonymously how they felt about a number of issues surrounding academic freedom. For the most part, the majority of these academics felt like they had freedom; however, in most cases, a substantial minority of academics felt like they did not have sufficient freedom.  

Broad questions

  • 80% of academics think that academic tenure should protect an academic regardless of what ideas they pursue.


  • 77% of academics rarely or never feel restrictions on the types of ideas they can pursue in their research/teaching, but 23% do feel such restrictions at least sometimes, with 8% feeling restrictions often or always.


  • 11% of academics have had the quality of their scholarship suffer due to pressure to avoid controversy at least sometimes, with the rest having this happen rarely or never.

  • 84% of academics disagree or strongly disagree that there are some topics which are so sensitive that they should not be discussed on campus.

  • 37% of academics feel like certain popular values in the university cannot be challenged without harm to their career at least sometimes.

  • 17% of academics feel like their tenure rarely or never protects their academic freedom, while 21.4% feel like their tenure does so sometimes.

Specific questions

  • 23% of academics feel pressure to avoid controversy from their institution at least sometimes.

  • 25% of academics feel pressure to avoid controversy from their colleagues at least sometimes.

  • 22% of academics feel pressure to avoid controversy from their government at least sometimes.

  • 22% of academics feel pressure to avoid controversy from their students at least sometimes.


  • 13% of academics have considered how to protect their personal safety from threats related to their scholarship at least sometimes.


  • 39% of academics agree or strongly agree that their students would receive a better education if their academic freedom was better protected by strong policy.

  • 22% of academics agree or strongly agree that their research would be stronger if their academic freedom was better protected by strong policy.


Background & Rationale

     Academic freedom is, presumably, important. That presumption is what inspired us to look into its current state. We didn't have a perfectly clear definition of academic freedom going in, but we were displeased that no good assessment of academics' opinions about it was available.

     One can argue about what exactly constitutes academic freedom. Some say it must be all-encompassing. Others say there is a point where it must end, for example, when research may uncover uncomfortable truths or facts that can fuel hate. That argument, and the authors' opinions on it, are beyond the scope of this inquiry. We simply wanted to look at academic freedom as it is understood by current academics, who are presently the best judges of the concept.


     We hoped that if we asked a variety of questions, with a variety of specificities, we could start to piece together what academics find important, what they feel they have, and what they feel they lack regarding this difficult-to-define concept of academic freedom.


     We knew going in that this research would not yield precise conclusions; we viewed it more as a starting point.  We wanted to start a discussion, and more importantly, make it easier for others to look into this concept by providing some foundation, as unstable as it may be.


     Given the popular press coverage of high-profile cases of professors caught between academic freedom and pressure to stop expressing or exploring, we had some general questions that guided this research:

  1. Is this even a problem?

  2. Is this as common an issue as reports make it seem?

  3. How do academics feel about the various parts of what constitutes academic freedom?


     Our general hypothesis, insofar as there can be one in this type of inquiry, was that only a very small proportion of academics feel like they don’t have enough academic freedom, and that the issue is not as important among academics as it was portrayed to be in the news. We expected less than 5% of faculty members to have significant problems with their academic freedom situation.


     We decided to just start asking questions.  We assembled a list of questions that could begin to elucidate this problem, if it existed, ran them by a committee of academics for feedback, and then started emailing academics across Canada at random.


The questions can be roughly split into three categories:

  1.  Abstract opinion questions

These are meant to gauge academics’ feelings, as they perceive them.

E.g., Q.18: "Do you feel free to pursue an idea that interests you in your scholarship?"


   2. Consequence-based questions

These are meant to make the academic think about what they really expect would happen.

E.g., Q.22: "My students would receive a better education if my academic freedom, and that of my peers, was better protected by strong policy."

   3. Exploratory causal questions

These are meant to try to understand why their feelings arose, or where influence is coming from.

E.g., Q.4: "I feel pressure from my government to avoid controversy."


     This work was out of personal curiosity. It was not funded, endorsed, supported, or facilitated by any university, corporation, or government.



University selection:  The 15 major research institutions in Canada were included.  Research rankings by reported grant funding and publication numbers were used to assemble the list. Therefore, when we say "X% of academics," we technically mean "X% of academics from the 15 biggest research universities in Canada".

Contact list:  A list of academics was created by going through faculty contact pages on university websites, department by department.  We focused on academic staff, and tried to exclude support staff or technical staff. The contact list was then randomized, and batches were selected to receive a personalized email asking them to complete a survey.


Ethics:  No personally identifiable information was solicited. We submitted the study design for evaluation by our university’s ethics board (University of Alberta).  They declined to evaluate the study as it was out of their "jurisdiction", as this research was not formally supported by the university.  They called it "independent special interest". We only contacted people through emails listed and open to the public on their institutional websites, and we took this listing as indicative that it would be appropriate to use the email address to ask academic questions. We asked only academic questions to academics, only through their academic email that was publicly listed on their academic institution’s website.


Survey: Google Forms was used to administer the survey. Responses were anonymous, and respondents could optionally leave a contact email to receive updates. New batches of emails were sent until we had enough responses to satisfy our statistical requirements.


Statistics: We received 358 responses that met inclusion criteria, including consent and appropriate status as an academic. Data are reported at a 95% confidence level with a ±5.1% margin of error, calculated against a population of 43,000 academics in major Canadian universities, the best population estimate we could get from listed employment numbers. That is, “X% said yes” in this report means “we are 95% sure that the true proportion of academics at major Canadian universities who say yes is X ± 5.1%”. This was satisfactory to the authors for the questions we had, and we are happy to support others in continuing this research to increase confidence and decrease error, should anyone be interested.
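The margin of error above can be reproduced with the standard worst-case formula for a surveyed proportion plus a finite population correction. This is a minimal illustrative sketch (not the authors' actual calculation), assuming the worst-case proportion p = 0.5 and z = 1.96 for 95% confidence:

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """Worst-case margin of error for a surveyed proportion at 95% confidence,
    with a finite population correction for sampling from a known population."""
    se = math.sqrt(p * (1 - p) / n)                       # standard error of a proportion
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

# 358 valid responses against an estimated population of 43,000 academics
print(round(margin_of_error(358, 43_000) * 100, 2))  # → 5.16
```

With these assumptions the formula lands at about 5.16%, consistent with the ±5.1% reported above up to rounding conventions.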


Raw data:  Raw data is accessible to others for verification, audit, and further analysis.  We deleted location and institution information to keep our promise of anonymity. We published the raw data to ensure that those who distrusted our analysis would have an opportunity to use our data to dispute our claims. Transparency is our goal in all aspects of this project, as long as it does not compromise the confidentiality of respondents, who were promised anonymity.


Further research:  We have created the only known list that contains the email addresses (published for academic purposes) of academics from major research universities in Canada.  This list contains 8500 emails, and we are happy to make it available for academic purposes on a case-by-case basis. Contact us and tell us about the research you are conducting.


Full Results



     A significant proportion of academics are having fairly serious academic freedom issues, if this survey is an even somewhat reliable representation of the real world.  Yet most academics (73%) rarely or never hear their colleagues complaining about it. This is an interesting disconnect. Perhaps the same factors that cause the feelings around academic freedom also cause the lack of discussion of the problem within academic circles; that would be intuitive. But maybe it's an entirely different issue. While academic freedom in general can be criticized by many groups, including governments, students, institutions, and other academics, conversations among academics themselves should be immune to most of those pressures, at least in theory.  We should not unfairly minimize the problem: one in four academics does hear complaints from colleagues on this topic at least sometimes. But given that at least 25% of academics are having at least moderate academic freedom issues, and academics tend to know many colleagues, you would expect these discussions to be more common.


     There is another interesting disconnect between questions 21 and 22.   Academics feel that pressure to avoid controversy is more harmful to their students’ education than it is to their research.  We wonder if this is simply explained by the fact that there are typically more eyes, and generally less-nuanced/expert eyes, on their teaching product than on their research product.  This finding might shift the “blame” of this problem more into the hands of the students, to whatever extent “blame” makes sense in this context.


     Interestingly, only about a third of the people who feel pressure from their institution to avoid controversy actually fear losing their job over controversy.  This might give us some insight into how norms are enforced in institutions: there must be levers to pull other than firing.


     Question 19 also places emphasis on the role of politics.  Despite academics' concerns about freedom, when it comes down to actually choosing research topics, the only factor other than strength of scholarship that they claim to be majorly influenced by is politics (federal, provincial, and municipal). There are a few other minor factors, but politics stands out as the main force influencing research topic selection. This question is important to consider because this survey was largely about how academics feel, but this question is about what they do, and there is a disconnect: academics generally feel pressure and restrictions from several sources, but they do not let most of them leak significantly into their scholarship, or so they claim.

     This study's response numbers are too limited for an accurate analysis of the data stratified by interesting categories like academic discipline, research methods, topics of interest, tenure status, etc. Our survey was set up to allow for this, and had the project not been shut down prematurely, we would have had these insights.  This is part of the reason we are recommending that others pick up where we left off. That could help build a more accurate characterization of this problem with a solution-oriented mindset. This of course presumes that you find some of these numbers problematic, which you may not.

Three trends stand out as interesting:

  • Academics whose research/teaching includes the topic of gender are 2.2X more likely to feel pressure from students to avoid controversy than academics whose topics do not include gender.


  • Academics whose research uses non-empirical methods are 2.2X more likely to feel pressure from students to avoid controversy than academics whose research uses empirical methods.


  • Academics whose research/teaching does not include the topic of health are 2.1X more likely to feel pressure from government to avoid controversy than academics whose topics do include health.


Note that these three observations do not have the same statistical strength as the other answers, and are to be taken as introductory trends only.


This inquiry was out of pure curiosity.  This is not a political statement.  The authors simply believe that if you are curious about something, you need to ask questions and talk about the answers.


Interpret how you will, examine the data as much as you'd like, and get in touch if you want to discuss. 


     First, let’s acknowledge that this is survey data.  The questions are imperfect, the randomness of who responds is imperfect, and how we interpret the answers is imperfect. Our philosophy is that we should be as open as we can about the problems with research like this so that it does not get taken more seriously than it should.


     In that spirit, we will publish here our own criticisms and all the criticisms we receive from others. To send criticism, contact us and we will publish it. If you have a very long criticism, please host it on your own site (this can be done for free) and send us the link with a <50 word abstract to publish, or email us a PDF document to link to.


Our own criticism:


  1. The stats tell us that the numbers reported should be fairly close to the actual values in the population.  But this is a small group of people. Only 385 people responded before Google closed our survey (358 of those met our inclusion criteria).  So while the percentages we report are a good estimate, there is a fairly good chance that a few of them are wrong.  Consider what a 95% confidence level means: when something goes wrong 5% of the time, it is fairly likely to happen once if you repeat the process 20 times. We asked 23 questions at 95% confidence, so we expect that about one of these answers is just plain wrong. That is, one of these answers very likely does not accurately represent the true feelings of the population, even if all wording was completely clear and there were no failures in communication. This isn't a technically perfect explanation, but it's a good rule of thumb. It is true of any study that uses a random sample to represent a population, and it is true here too.

  2. Multiple choice survey questions allow for quantification, but open questions that let respondents describe their answers as they see fit allow for more precision. We chose the former for ease of analysis and communication. It is a quick and dirty estimate that helped us answer the research questions we had, and we acknowledge that this simplicity brings both advantages and disadvantages. It prevents us from injecting our biases into the analysis, because we simply calculate percentages, but it does allow us to inject our biases into the question design, because survey questions can be written to elicit certain responses more reliably than others. We were cognizant of this in the study design, but we know we didn't do it perfectly. So in many instances, we asked the same sort of question a couple of ways to give ourselves and our readers a method of examining how important the wording was to a certain response.

  3. Response bias is probably an issue here.  It usually is with these types of studies. We had one great advantage in this study: literally every respondent was an immensely educated person with a firm understanding of research, and the tools needed to evaluate it.  The fact that we were asking academics means this may be one of the more accurately communicated studies ever performed (through no virtue of the authors). But there is a clear problem that likely occurred: people who felt most strongly were probably more likely to respond than those who felt least strongly. That is, polarized responses may be over-represented in this study. Imagine that in real life 20% strongly agreed, 20% agreed, 20% were neutral, 20% disagreed, and 20% strongly disagreed.  We would not be surprised if 50% of the strongly opinionated people answered but only 10% of the others did. This would create a ratio of 10:2:2:2:10, exaggerating how strongly people feel about the issue and falsely suggesting that 38% (10/26) strongly agree. This probably happened here to some extent.

  4. In creating our list of academics' emails, we wanted to use only email addresses that were listed publicly for academic purposes, and we could only obtain emails that were listed online. Some universities leave certain faculties' or departments' contact info unlisted or masked, so we did not contact them.  This may have skewed the results in some unknown direction. Some of this can be mitigated by looking at the stratified data, but once stratified, the numbers are so small that we cannot draw strong conclusions from them.
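The arithmetic behind points 1 and 3 above can be checked directly. This is a minimal sketch using the hypothetical numbers from those points (not real survey data), and assuming the 23 questions are independent:

```python
# Point 1: with 23 questions each at 95% confidence, the expected number of
# confidence intervals that miss the true population value is n * 0.05.
n_questions = 23
miss_rate = 0.05
print(round(n_questions * miss_rate, 2))             # → 1.15, i.e. about one answer

# Probability that at least one interval misses, assuming independence:
print(round(1 - (1 - miss_rate) ** n_questions, 2))  # → 0.69

# Point 3: hypothetical response bias. The true population is 20% in each
# category, but strongly opinionated people respond at 50% versus 10% for others.
rates = {"strongly agree": 0.5, "agree": 0.1, "neutral": 0.1,
         "disagree": 0.1, "strongly disagree": 0.5}
observed = {k: 20 * r for k, r in rates.items()}  # yields the 10:2:2:2:10 ratio
total = sum(observed.values())                    # 26
print(round(100 * observed["strongly agree"] / total))  # → 38, inflated from 20
```

So even with these modest differences in response rates, the apparent share of "strongly agree" nearly doubles from its true value.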


The criticisms others have sent in: 

--Note: We publish every criticism we receive. Try it if you want--

  1. A few variations of our criticisms that are already fully captured above.

  2. This was not approved by an ethics board.  (Authors’ clarification: This is true, because the ethics board to which this study design was submitted before beginning refused to examine the study, as it was out of their jurisdiction and constituted an "independent special interest". This was appealed and lost.)

  3. These questions should not be asked (no reason given; several occurrences).

  4. The authors are empirical, quantitative scientists and therefore are not qualified to ask these questions without going through a non-empirical, qualitative researcher.

  5. The authors clearly biased the questions to maximize the perception of an academic freedom problem (few).

  6. The authors clearly biased the questions to minimize the perception of an academic freedom problem (few).

  7. The authors did not give enough choices on the questions and my specific academic freedom issue is not represented (few).

  8. I’m not answering as you did not address me as “Professor Redacted”.

  9. Thank you for doing this (many).

  10. This study "violates human dignity".

(updated 14 May 2019)

More to be added here as they come in...


Future Directions

     This is just a starting point.  We need more questions asked, and more responses collected.  And this work needs to evolve into the practical realm.  If we acknowledge that there is a perceived problem, is there an appetite to fix it? Are academics motivated to work on changes?  

     After this study was shut down by Google, we realized that this was a battle we did not have the resources to win at this point.  We encourage researchers who want to look at this issue more thoroughly to get in touch so we can transfer our lists, data, and tools, and make the next steps even easier.

     And finally, we will be working on making research ethics review more accessible as a public good.  Get in touch if you want to be involved in that effort.


For Press and Media

The authors will provide commentary and clarification to interested parties.  You can start that process by getting in touch here.

Our figures and stats may be republished with attribution.