Availability bias is a human cognitive bias that causes people to overestimate the probability of events associated with memorable or vivid occurrences. Because memorable events are further magnified by media coverage, the bias is compounded at the societal level. Two prominent examples are estimates of how likely plane crashes are and how often children are abducted. Both events are quite rare, yet much of the population believes they are far more common than they actually are and behaves accordingly.
In reality, people are much more likely to die in an auto accident than in a plane accident, and children are far more likely to die in an ordinary accident than to be abducted. Most people believe the reverse, however, because the less likely events are more "available" — more memorable. A look at the literature, or even at the interactions of daily life, reveals countless examples of availability bias in action.
Availability bias is at the root of many other human biases and culture-level effects. For instance, medieval medicine was probably barely more effective than leaving a malady to heal on its own, but because the times when a therapy "worked" were more available in people's minds, practicing medicine was generally considered effective whether or not it really was.
The study of this bias was pioneered by psychologists Amos Tversky and Daniel Kahneman, who founded the field of "heuristics and biases" and developed prospect theory, a model of systematic bias in human decision-making. Kahneman went on to win the 2002 Nobel Memorial Prize in Economic Sciences for this work, despite never having taken an economics class. Tversky, his long-time partner in the research, died in 1996.
A concept intimately connected to availability bias is base-rate neglect: the tendency to underweight the underlying base rate of an event in favor of vivid, case-specific information, biasing the resulting probability judgment. An example would be admitting someone to a college largely on the strength of an interview, when empirical studies have shown that past performance and grades are the best available predictors of future performance, and that interviews tend to cloud the assessment rather than sharpen it. Because people like "seeing things for themselves," however, the interviews are likely to continue, even in the absence of evidence for their effectiveness.
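The arithmetic behind base-rate neglect is easy to see with Bayes' rule. The sketch below uses hypothetical numbers (a 1% base rate, a 90% hit rate, and a 9% false-positive rate) for a generic diagnostic test; the point is that the intuitive answer of "about 90%" ignores the base rate, while the actual posterior probability is under 10%.

```python
# Hypothetical numbers illustrating base-rate neglect via Bayes' rule.
prior = 0.01           # base rate: 1% of the population has the condition
sensitivity = 0.90     # P(positive test | condition)
false_positive = 0.09  # P(positive test | no condition)

# Total probability of a positive test, across both groups.
p_positive = prior * sensitivity + (1 - prior) * false_positive

# P(condition | positive test) by Bayes' rule.
posterior = prior * sensitivity / p_positive
print(round(posterior, 3))  # ≈ 0.092, not 0.9
```

Intuition latches onto the vivid, case-specific 90% figure; the low base rate, which does most of the work here, is exactly the information that gets neglected.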