
Drugs to make you smart

For as long as it has existed, the human race has strived to make itself better, to improve upon its natural abilities and to push its boundaries. The Olympic Games display impressive feats of human physical endurance, strength and skill. The Guinness Book of World Records celebrates some of the more ‘niche’ (yet no less impressive) human abilities, such as holding 43 snails on a face at once and squirting milk over impressive distances from an eyeball. These may not be particularly useful skills to have, but the collection of records in the Guinness book is still a demonstration of how we endeavour to succeed, to improve, and to be the best.

So what about our brains? Can we make them better, faster, smarter?

There is a group of drugs, known as ‘Nootropics’ or ‘Cognitive Enhancers,’ used for just this purpose. These drugs act by changing the regulation of signalling systems within the brain – that is, they alter how brain cells communicate with each other, thereby subtly altering brain function. A cognitive enhancer aims to improve cognition – the brain’s ability to think, make decisions, learn, remember and solve problems. All are essential abilities for living independently, holding down a job, and succeeding at school.


Various cognitive enhancers have been around for decades, and new ones are being developed all the time. However, these drugs are created with the aim of treating psychiatric and neurological problems, where poor cognition is a symptom or a side effect of the illness. But do they work on healthy brains? Can we use drugs to push our cognitive abilities above and beyond our usual boundaries?

Can we make ourselves cleverer – *ahem* – I mean, more clever?

Well, it may be possible, although it’s not all that clear, and definitely not that simple. The use of cognitive enhancers in a healthy population is a relatively new consideration, so research into the effect of these drugs is very much in its early stages, and there is almost no data on the long-term effects of regularly taking such enhancers.

It’s also important to bear in mind that there is no one ‘wonder pill’ that can make someone smarter. Rather, there are many, many different drugs that affect slightly different, overlapping systems in the brain. Each one may therefore improve a different, particular element of cognition, which in turn means an individual is more able to learn, and as a result will be smarter. For example, some drugs will improve attention, others will affect memory, and others will increase alertness. By taking a cognitive enhancer, you will not suddenly be able to answer all of the questions on University Challenge.

So here is a summary of some of the most common cognitive enhancers currently being used:

ATTENTION (Ritalin/Atomoxetine)

One of the illnesses most commonly treated with cognitive enhancers is ADHD (attention deficit hyperactivity disorder), which is characterised by a short attention span, hyperactivity and impulsiveness – symptoms for which cognitive enhancing drugs are thought to be a useful treatment. Two of the most common drugs used for ADHD are Ritalin (Methylphenidate) and Atomoxetine.

Ritalin and Atomoxetine both increase noradrenaline and dopamine in the brain. Noradrenaline and dopamine are chemicals that send signals between brain cells, and are therefore known as ‘neurotransmitters.’ In ADHD, there is a reduction of both of these neurotransmitters, suggesting that communication within the brain is not efficient. By increasing the level of these neurotransmitters, Ritalin and Atomoxetine improve communication between brain cells, resulting in better alertness and attention.

So what about in a healthy individual without ADHD? Can these drugs further enhance cognition above and beyond what can be achieved with hard work alone? Many people believe so – ADHD drugs are commonly found on university campuses, particularly in the USA, where they are illegally purchased by desperate students trying to improve their exam performance. But does it work?

The answer isn’t exactly clear. Several studies have indicated that taking Ritalin and Atomoxetine can be beneficial in healthy adults – they can increase accuracy and performance on various tasks that require good attention and memory. But the size of the effect appears to be fairly modest, and seems to depend on the individual’s natural ability in the first place – those who had poor attention and memory to begin with saw an improvement in their performance after taking Ritalin, but there was no benefit for those who already performed well. Another dopamine enhancer, Bromocriptine (used for the treatment of Parkinson’s disease), actually lowered the performance of individuals who had initially performed well.


MEMORY (Aricept)

Cognitive enhancing drugs are also commonly used by those suffering from neurodegenerative diseases, such as Alzheimer’s or Parkinson’s disease. Both affect cognition and memory, and while there is no cure for either condition, cognitive enhancing drugs may delay or slow down the progression of the cognitive symptoms. Aricept (Donepezil) is commonly used for Alzheimer’s disease. It increases the levels of another neurotransmitter – acetylcholine – by stopping it from being broken down and recycled in the brain.

The majority of people experience and complain of a worsening memory as they get older, so a memory-enhancing drug such as Aricept is going to be of interest to many people, not just those suffering from dementia. As such, research has begun to investigate how Aricept may affect healthy individuals. It has been found to enhance pilot performance after flight simulation training, although a review of multiple studies looking at Aricept found the evidence for its ability to enhance memory unconvincing, with several studies finding no effect, or even an impairment in cognitive ability, following treatment.

ALERTNESS (Modafinil)

Modafinil (Provigil) is a treatment for narcolepsy and sleep apnoea, and enhances cognition by increasing alertness and wakefulness. Apparently it is commonly used by individuals in high-stress professions, or in jobs that require long hours and shift work, to help them stay awake – such as doctors, military personnel, and academics. A study of British universities indicated that its use is pretty high among the undergraduate population too, with a 2013 report describing it as the ‘drug du jour’ to aid studying, despite it being a prescription-only medication. The exact way that Modafinil affects the brain isn’t fully understood – it is thought to alter similar systems to Ritalin and Atomoxetine, but it has also been associated with multiple other neurotransmitters and systems.

Modafinil has been shown to improve cognitive function in male volunteers by enhancing alertness and attention paid to the tasks they were given, and by inhibiting quick, impulsive responses. The volunteers also reported feeling more alert and energetic after taking the drug. However, other studies have found that, as with the ADHD drugs, Modafinil has a greater effect on those with a poor initial performance, and may be of limited use to those with a high cognitive ability.


Overall, there is some tentative evidence that cognitive enhancing drugs typically used for neurological problems could have some benefit in the healthy population. So what should stop you from grabbing a big ol’ box of pills to improve your performance at school or work? Well, lots of things, actually – here are just a few:

  • A big one is that no one knows the long-term effects of taking any of these substances – the majority of studies investigate the effects of a single dose, or of treatment over just a few weeks.
  • It is also important to bear in mind that these drugs are likely to have undesirable side effects. When they are used in people with a neurological disease, the therapeutic effect of the drug is judged to outweigh the discomfort of the side effects. As the current evidence points to very modest effects in healthy people, the balance between the benefits and the risks may no longer be favourable.
  • It isn’t really understood how they work in healthy people, and it is different for everyone. Most of the studies I came across while researching this post pointed out that the effects of cognitive enhancers were very variable between different people. This may be down to an individual’s brain chemistry, gender or genetics, but currently it isn’t possible to predict how, or if, a cognitive enhancer will work in any one person.
  • There’s a big ethical debate about whether the use of cognitive enhancers is OK. Is it cheating? How different is it to using caffeine? Would people feel coerced or pressured into taking them in order to ‘keep up’? Does it undermine the value of hard work?

I don’t know the answers to the ethical questions, and there are strong arguments on both sides. Nevertheless, the use of cognitive enhancers is an interesting and divisive topic, and looks to be a growing field of research. The current consensus appears to be that these drugs may be useful tools for healthy individuals in the future, but at the moment their benefits and effects are questionable.

Personally, I’m happy to continue to celebrate if/when I manage to answer just a single question on that darn University Challenge.

The Biocheminist.

Imposter!!

For a huge portion of my PhD I felt like an imposter. One day, someone was going to figure out that they had made a huge mistake – that I shouldn’t have been put on the course and I was going to get thrown out any day soon. That despite no one showing any concern, I clearly wasn’t up to the job.

These thoughts and feelings have a name: ‘Imposter Syndrome.’ It’s apparently widespread throughout academia (although no one talks about it) as well as in other professions, and it is more commonly reported in women.

A few months ago – when I was no longer entirely in the throes of imposter-style thinking – I was called out on it. Imposter syndrome came up in a conversation, and my supervisor looked at me and casually said ‘oh yeah, you have that,’ and then the conversation carried on.

Oh no! My secret! It was known all along!

When I decided to write this post, I naively thought that surely, there wouldn’t be much written about this affliction – but I was wrong! The internet is full of posts detailing imposter syndrome within and outside of academia, and the reasons why it may be more common in women.

So instead I thought I’d write about how, on reflection, I believe I came to feel like an imposter (and how I’m getting over it), in the hope that another early-career researcher will read it, realise they aren’t the only one feeling that way, and build up their own confidence and move on. And if that encourages someone to stick with research when they feel like they should quit, then hurrah!

 

  1. You don’t know what you know

My undergraduate degree was in Psychology – while it contained some modules on basic neuroscience, it was a world away from the biochemistry/cell biology PhD that I went on to do. For much of my first year I was coasting on barely remembered snippets of A-level biology – I was lacking the fundamental basics and had to catch up quickly. I also tended to keep quiet about the fact that I did psychology, as many ‘proper’ biochemists still see it as a bit of a joke subject with no scientific merit. This did little to aid my confidence or convince me that I should be there!

When I started my PhD, my supervisor told me something that has stuck in my head, and I have retold it numerous times to other students when they needed some reassurance. It is that:

One of the most difficult things about doing your PhD is learning the difference between what YOU don’t know, and what NO ONE knows.

It is incredibly accurate and describes the stages of my PhD pretty well – in the first year I didn’t know anything (see above!), but I thought that everyone else knew everything. It felt like everyone else was privy to all this information that I just didn’t have, and that made me feel a little excluded – that I was tagging along and just ‘faking it’ to be a part of the group like everyone else. But the learning curve was steep, and by the second year I was aware of what NO ONE knows – both in my office and in the scientific community. The difference between ‘them’ and ‘me’ got smaller, and accordingly so did the feeling of being an imposter. By the third year, I knew how to do things no one else in the office knew, and from doing my research I now knew things that no one else in the world knew. And that is pretty freakin’ sweet.


  2. You don’t get graded and you get less feedback

I’m a nerd. Always have been. I have always tried to get the best grade that I can – I shot for the ‘10/10s’, the ‘A’s, the ‘A*’s and the ‘first class’ – and more often than not, I got them. Academia is somewhat different – and it took me a while to get used to that.

There are no grades, and typically there isn’t a great deal of positive feedback either – every experiment doesn’t get marked, each report doesn’t get a score. This was a huge adjustment for me. Coming into my PhD, I was used to having constant feedback on my work – reports, coursework, essays and exams were all regularly graded and given back so that I knew whether I was on the right track or not. Without regular reports and grades, I had no idea if what I was doing was correct or good enough – and as I tend to veer towards a pessimistic personality, I could easily convince myself that I was doing everything wrong and didn’t deserve to be there.

The feedback that you do receive in academia can actually be overwhelmingly negative – in order to ensure that the highest quality work is being done, anyone reviewing that work has to be highly critical, meaning they are more likely to pick out your errors and mistakes, or tell you that your theory is wrong, than to say ‘wow, this is great – A!’ If that reviewer is in a bad mood (or is just a mean person who relishes creating student misery), then you also can’t guarantee that the feedback will even be constructive.

This combination of the removal of active positive feedback and the more frequent occurrence of negative feedback is perfect for breeding insecurity, and consequently feelings of being an imposter. Now, I’m not saying that PhD students should be coddled and told how great they are – the point is to push them and train them up to be confident, independent researchers who can stand their ground and produce the highest quality work possible, so constant hand-holding and reassurance would be damaging. But a little warning that things were gonna be different would have helped!

I’ve probably found this contribution to Imposter Syndrome one of the hardest to tackle – time and experience have proven to be the best solutions. The more work I’ve done and the more successful experiments I’ve run, the better I have become at self-reassurance (although there is the occasional wobble). I have also come to the conclusion that if no one is saying anything to you about what you’re doing, then it’s probably alright – a big deal will only be made if you’re doing something wrong! So carry on in the knowledge that you’re doing just fine!

 

  3. The Academia bubble

I am the only member of my family to have completed an undergraduate degree, so doing a PhD has been a pretty big deal. My friends also seem rather impressed by this accomplishment. And getting a PhD IS a big deal! It’s hard! Not a lot of people do it!

All of this is forgotten upon entering the academia bubble.


 

In this bubble, everyone has a PhD. It’s a totally normal thing. In fact, it’s an essential requirement for an academic career. You are no longer top of your class, no longer the best of a bunch of interviewees – you are bottom of the food chain! Bottom rung of the ladder! In these circumstances, it’s easy to forget that getting a place on a PhD course is an excellent accomplishment, never mind completing the darn thing! In this environment, a new, insecure researcher can lose sight of their ability, talent and worth while trying to do their best in the shadow of the post-docs, fellows and professors above them.

The solution to this issue? Get out of the bubble and into the real world whenever possible! I particularly enjoy giving my title as ‘Dr’ whenever I’m asked whether I’m a ‘Miss or Mrs?’ As well as being quite entertaining, it’s a nice little boost to the ego. And that confidence is essential to be an independent researcher and keep on going in the face of failed experiments and harsh criticism.

 

  4. My brain

Finally, I believe that one of the biggest contributing factors to my feeling like an imposter is not the fault of the system around me or of how people have treated me – it isn’t even necessarily grounded in reality or logic. It’s rooted in how I think and interpret things. As I have already mentioned, I can typically be pretty pessimistic and negative – this has shaped how I’ve dealt with all the points I’ve described above. When I first started in science, I would interpret any criticism (or even a lack of it!) as an indication of failure and proof that I wasn’t up to scratch. A more optimistic or naturally confident person perhaps wouldn’t struggle with the removal of feedback, wouldn’t be fazed by bubbles, and would worry less about comparing themselves with other people in the lab (those students DO exist and I find them creepy!).

While my pessimism and negativity haven’t generally been helpful, the belief that I wasn’t good enough has pushed me forwards to be better. I discovered a ‘screw you – watch this!’ attitude when I came up against individuals who didn’t respect my work. People started to come to me for help and advice.

And eventually I realised that actually, I am good. I am really very good at what I do (and with my British sensibilities that’s a difficult statement for me to post on the internet!).

My negativity and the pressure I put on myself to do and be the best can still sometimes ruin the occasional evening (and endlessly annoy my poor husband), but they no longer dictate how I feel about working in research or whether I deserve to be there.

There’s no magic solution for getting rid of Imposter Syndrome – I found that time, experience, allowing myself to indulge in some positive thinking, ‘letting the haters hate’ and working hard pulled me out of it. I was recently speaking to a colleague in the lab who had just been given a new contract, and she told me she was waiting for them to realise they had made a mistake – so it wasn’t just me! Knowing that countless other people have experienced the same thing gives it much less power, and makes it a much less individual and personal experience.

 

Below are some links to other sites discussing Imposter Syndrome in more detail and how to deal with it:

https://counseling.caltech.edu/general/InfoandResources/Impostor

http://www.apa.org/gradpsych/2013/11/fraud.aspx

http://shriverreport.org/10-ways-to-overcome-impostor-syndrome-joyce-roche/

http://www.huffingtonpost.com/caroline-dowdhiggins/impostor-syndrome_b_1651762.html

 

The Biocheminist

10 Tips For Writing Your Thesis*

*or any other preposterously long document

It’s been almost exactly one year since I had my viva (thesis defence) and earned my doctorate. Three years of hard work needed to be condensed, made sense of, and woven into some kind of coherent narrative to be presented to – and sometimes torn apart by – senior researchers and professors.

Writing a thesis isn’t easy. In the UK there is no minimum word count, but the maximum tends to be around 80,000 words – which gives an idea of the size these things can get to (although the average word count for a scientific thesis is typically about half of that). So it’s no easy feat. But it can be done! And it can be done without too much stress, too many tears or total social isolation.

For this post, I have decided to share my top 10 tips for writing a thesis, which have come directly from my experience of ‘writing up’ last summer – including both my successes and the power of hindsight following my failures. I hope you find them helpful!

  1. DON’T PANIC!!!

When you first start your PhD, the idea of writing a thesis may be utterly terrifying and completely alien to you. There will likely be a whole stack of them around the lab from students past, filled with nonsensical scientific language, endless experiments, graphs and figures. And all those references!!

‘How the hell will I ever write one of those? A book! A whole freakin’ book!?!’

Before you panic, or pretend that it isn’t happening until a few months before your submission deadline, remind yourself – it isn’t that bad. First of all, yes, it’s an entire book. But the requirements for submission usually ask for at least size 12 typeface, double spacing, one-sided printing and massive margins to allow for neat binding and easy reading. So all of a sudden, one page of typed text actually becomes three. Now imagine all those massive books at only one third of their size, and it all becomes a bit more manageable.

I also mentioned the word count – but don’t worry about it; it shouldn’t be a concern. Unlike pizza and doughnuts, bigger does not mean better. My behemoth of a thesis was 73,500 words and was BIG – much bigger than those submitted by my peers – and it came under waves of criticism before even being opened and read. As it was, my examiners agreed its size was justified *phew*, but no one is going to be impressed just because you’ve written more. They will just be annoyed that they have to read it and carry it around. It’s also likely that in a massive thesis the writing hasn’t been done concisely – making it particularly unenjoyable to read and detracting from all your marvellous work.


  2. START EARLY

It’s easy to think in your first year that a 3-4 year deadline is far enough away to ignore for a couple of years. And actually, yeah, it is – a thesis can be written in a few months if given your solid attention. But I don’t recommend it, and when you can lay the groundwork with relatively little effort early on, why not reduce the load (and stress and panic) later? Try to get into the habit of making a final figure, graph or image whenever you finish an individual experiment, and collate these figures into a single, easy-to-find document. When writing up, I found that what took up most of my time was fishing out old data I knew I had somewhere, then making it look respectable and presentable rather than a half-arsed, unlabelled, multi-coloured graph lounging at the bottom of a spreadsheet. It’s also easy to get started early on your methods section – whenever you do a new experiment or use a new technique, just type out all the details, including where all your equipment and reagents came from. It takes practically no brain power and will save you digging through multiple lab books and making a frenzied dash around the lab to find out where you purchased everything. If you can keep up to date with these things, you will save so much time when you come to actually write up your work.

  3. WRITE YOUR RESULTS FIRST

Your results are the easy bit, and a huge bulk of your thesis. I recommend writing them first because – like the methods – they take less brain power and effort to write. Of course you need to put some thought into how you are going to present and order your results, but the actual paragraphs just describe what you did and what happened – the more-difficult-to-write reasons why you did it and what it means are reserved for the introduction and discussion sections. If you have an empty page and just need to get something started, but your mind is blank or overwhelmed – start with the results. Formally writing out everything you’ve done might also give you a fresh perspective on your work and help you form a good discussion section. Once you’ve got something on the page, it’s much easier to do the rest.

  4. BREAK IT DOWN

This is a particularly useful thing to do for the text-heavy introduction and discussion chapters, but it is also beneficial for planning out every single part of your thesis. I mean this as a way of breaking down large sections of text into smaller, more manageable chunks. For example, the introduction chapter can be the most daunting – this is where you need to summarise an entire field of research relevant to your PhD project and introduce the important themes. It was the last section that I wrote, and I put it off for a long time. However, by planning in advance what I wanted to write about in each section, I could ignore the chapter as a whole and concentrate on a section of a few hundred words at a time. That doesn’t seem so bad! I completed a lot of writing without really noticing this way, and could then go back and link or re-order the sections as necessary.

  5. WORK WHERE YOU ARE COMFORTABLE

Everyone prefers to write in a different place – don’t let other people influence where you do your writing. Many of my colleagues preferred to write in the office, because it was their working environment and put them in the right mind-set. Others preferred the silence that a library can provide. Personally, I found the office too loud and the library too quiet, and instead preferred to slump somewhere at home in elasticated trousers with CSI quietly on in the background. But that’s just what works for me. If other students are putting in 12-hour days in the office or pulling all-nighters at the library (madness!), don’t feel you have to do the same to ‘prove’ that you are working just as hard, if that isn’t what works best for you. Equally, if everyone else is working from home but you need the office environment to concentrate, then don’t feel like a loser for going in to write.


  6. TAKE REGULAR BREAKS AND MAKE A TIMETABLE

I’m a sucker for a good plan. From GCSE exams through revision for undergraduate final-year exams to writing my thesis, I have always planned when I’m going to do which bits of work. This has several advantages. First of all, like many of my previous suggestions, it breaks the work down into more manageable chunks. For example, if you have planned out your paragraphs and sections, you can then put them into a timetable to plan when you will write which ones – maybe you will have time to tackle 5 sections a day over several weeks, or, if you’ve left it late, then maybe it will be 15 sections a day in a lot less time! Whatever the case, it gives you a goal to work towards (see the sketch below). Another advantage is that, if planned well, you can avoid working those ridiculously long hours and all-nighters that we all hear horror stories about. And if free time is expected and planned, then you won’t feel guilty for not working! Bonus number 3! You can even extend this timetable to incorporate which hours of the day you work best – there’s no point strictly telling yourself you will work 9-5 if you never really get going until 11am. So have a lie-in! Work from 11-7! Make better use of the hours when you work most efficiently. Advantage no. 4 – making this timetable is an excellent little bit of procrastination before you really get going with writing.
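
If you like the idea but hate the arithmetic, here is a minimal sketch of the sections-per-day sum in Python – the section count and the two-rest-days-a-week assumption are mine, purely for illustration:

    # Minimal sketch of the timetable arithmetic: how many sections a day
    # do you need to write to hit your deadline? Numbers are illustrative.
    from math import ceil

    def sections_per_day(total_sections, days_left, rest_days_per_week=2):
        # Scale calendar days down to actual working days.
        working_days = days_left * (7 - rest_days_per_week) / 7
        return ceil(total_sections / working_days)

    # e.g. 150 planned sections:
    print(sections_per_day(150, days_left=60))  # started early: 4 a day
    print(sections_per_day(150, days_left=14))  # left it late: 15 a day!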

Taking regular breaks is also important – you can treat them as little rewards for each section you complete, or as an opportunity to move around a bit or think about something else for a while. This will stop the work becoming too monotonous or tiring, and will actually mean you can concentrate better while you are writing. The length of working time between breaks and the length of each break are up to you, but be sensible! If I was writing a section I found particularly difficult or boring, I would take a 5-minute break for roughly every 15 minutes of work – just checking Facebook or something. If I was on a roll, I just kept going until that roll unwound, then rewarded myself with some kind of cake. Or, if I did really well, maybe I’d go change out of my pyjamas into proper clothes.

  7. NEVER UNDERESTIMATE FORMATTING & EDITING TIME

This tip, like the one about starting early, is born from wonderful hindsight. Formatting was the only thing that led me into the nightmarish realms of 3am thesis writing. Make your figures to the correct scale in the first instance – I wasted days re-jigging figures made to the wrong scale that wouldn’t fit sensibly on an A4 page! But even if you do this right, fiddling about with the best placement of figures and text will take longer than you anticipate.

Check your university’s guidelines on thesis presentation before you start writing. You will need to consider the numbering of headings and subheadings, the preferred reference format (both in text and in the bibliography), page numbering, indexing, appendices… the list goes on. While it’s tempting to do all the writing and then deal with these things at the end, on a document as large as a thesis that can cost you a lot of time and sleep. Set those things up first, then be super smug when someone else only bothers to read the guidelines the day before submission.

 

  8. USE A REFERENCE MANAGER

People still write without using a reference manager, and to me that seems insane. A reference manager stores records of the manuscripts, papers and book chapters that you read, and works with your word processor so that you can ‘insert’ the reference you need into your text; it will then automatically generate a bibliography from the inserted references. This means you don’t need to manually check your references, type them all out, and re-order or re-number them whenever you make any edits. You really should use one. There are several different managers out there, and they tend to be free or have free versions. A reference manager will also give you the option of different formats to present your references in – it’s advisable to choose the one that matches your university’s guidelines from the start because, although the process is automated, changing the format of several hundred references in a document tens of thousands of words long may put your computer out of action for a while.
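
To demystify what a reference manager is doing under the hood, here is a toy Python sketch of the core trick – swapping citation keys in the text for numbers and generating an ordered bibliography. The keys, references and bracket format are all made up for illustration; real managers (EndNote, Zotero, Mendeley and friends) do far more, and do it inside your word processor:

    # Toy reference manager: replace citation keys with numbers and build
    # a matching bibliography. All example data here is made up.
    import re

    library = {
        "smith2010": "Smith, J. (2010). A made-up paper. J. Imaginary Sci. 1, 1-10.",
        "jones2012": "Jones, A. (2012). Another made-up paper. Fict. Rev. 5, 20-31.",
    }

    text = "Protein X is important [@smith2010] and interacts with Y [@jones2012]."

    order = []  # citation keys in order of first appearance

    def number_citation(match):
        key = match.group(1)
        if key not in order:
            order.append(key)
        return f"[{order.index(key) + 1}]"

    numbered_text = re.sub(r"\[@(\w+)\]", number_citation, text)
    bibliography = "\n".join(f"{i + 1}. {library[key]}" for i, key in enumerate(order))

    print(numbered_text)
    print(bibliography)

Change the format string that builds the bibliography and every reference updates at once – which is exactly why it pays to pick your university’s preferred style before you start.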

  9. GET FRESH EYES

Either your own or someone else’s! After looking at the same bit of writing over and over again, you won’t see your mistakes. You will know what you meant and what you wanted to say, so your brain will ignore any spelling mistakes or grammatical errors. The only way to get around this is to do something else for a while and then come back to it – and when I say a while, I mean at least a day! Write something, do some edits on it, then leave it and work on another section. When you come back to the original section, it should be a bit less familiar and any mistakes a bit more obvious. Of course, it’s even better if you can get someone else to read it for you! It is particularly useful if that person doesn’t work on the same thing as you – someone who does will have similar knowledge and make the same assumptions, so may not notice errors because they ‘know what you mean.’ Someone unfamiliar will notice the bits that don’t make sense or that you haven’t fully explained. Bear in mind, it takes either great friendship or great coercion to get someone unrelated to your project to read any of your thesis.

  10. DON’T PANIC!!

A reiteration of my first point! But I am now referring to the end of the writing process rather than the beginning of a PhD. You will have made mistakes and there will be errors. While a thesis full of spelling mistakes and sloppy writing gives a bad impression, the occasional misspelt word is no cause for concern – so don’t worry if you spot a few after submission. The examiners know you’re human, and it’s entirely possible (and probable) that they won’t notice many of these small mistakes anyway – and if they do, they are just ‘minor corrections.’ I missed out my entire index of figures in my submitted version, and it wasn’t a sticking point in my viva! Stay relaxed while you’re writing, have a plan, and it’ll be fine. In the New Year, I’ll post some tips on how to survive the dreaded viva itself…!


 Did you find any of these suggestions useful? Have you got any tips for writing a thesis? Let me know in the comments below!

 

The Biocheminist

Growing cells – it’s a culture thing

Growing cells – known as Cell Culture – is a fundamental process carried out in most biochemistry research labs. Having a never-ending supply of cells available is a valuable resource for researchers. It allows us to manipulate cells and investigate the effects of new drugs in ways that would be impossible, expensive or unethical in animal models or in people. Cultured cells also provide a consistent and plentiful source of material for performing lots of experiments in a relatively short period of time.

There are hundreds of different types of cells, referred to as ‘cell lines,’ which come from different parts of the body and different species, and are created in different ways.


Some neuronal cells growing in a dish

Broadly, there are two categories of cell line:

  1. Primary

These cells are taken directly from a piece of tissue, and have a finite lifespan. They will not continue to grow and divide indefinitely, so they are used in short-term experiments.

  2. Continuous

Continuous cells have originated from a piece of tissue, but they have been transformed in the lab so that they continue to grow and divide indefinitely. These are often referred to as ‘immortalised’ cells. The most famous and most common cell line is known as HeLa, which originated from a biopsy of an extremely aggressive case of cervical cancer. HeLa cells are a particular oddity as they appear to have transformed themselves without any manipulation in the lab. HeLa cells have a complicated and controversial history relating to medical and research ethics – to find out more about them, I would highly recommend reading ‘The Immortal Life of Henrietta Lacks’ by Rebecca Skloot (don’t worry, it’s not too sciencey!).

So what do you need to grow cells?

All cell lines are different and may have specific needs, but the basics are the same. Cells are grown in a nutrient-rich liquid referred to as ‘media.’ Media helps stabilise cells and provides the essential nutrients required for cells to grow.

In loose terms, growing cells isn’t too dissimilar to growing a human baby – it’s a case of food in and waste out, and some care in between to make sure they don’t get sick. It’s also beneficial to avoid dropping them on the floor. Media therefore commonly contains the following:


Growing cells just like growing babies… kinda

Glucose: Provides energy to the cells.

Glutamine: An amino acid that acts as an extra energy source.

Phenol Red: A pH indicator, which changes colour if the acidity of a solution changes. Cell culture media is commonly a reddish-pink colour because of the phenol red, but if the culture becomes too acidic – perhaps through cell overgrowth, infection or an accumulation of waste – the media will turn a gross yellowish colour, so it is easy to see when something is wrong. Media needs to be removed and replaced regularly, as the cells use up energy and consequently produce waste, which is toxic to them if it builds up.

Antibiotics: To help prevent any unwanted infections.

Serum: The remaining component of blood after clotting and the removal of any remaining blood cells. The most common serum used in cell culture is fetal bovine serum (from cow fetuses), referred to as ‘FBS,’ which is a by-product of the meat industry. Serum is essential in cell culture because it provides all of the components normally present in the body that help cells grow and survive, such as proteins, carbohydrates, hormones and vitamins.

Sadly, there is no additive to correct researcher clumsiness.


Cell culture in action using media containing phenol red

But it’s still not quite as straightforward as feeding and cleaning!

Cells have to be cultured in special sterile conditions – because the cells are no longer growing in a complicated system made up of hundreds of different cell types and a functional immune system, they have no protection against infection. The addition of antibiotics to the media helps protect against bacterial infection, but they are no substitute for proper sterile technique!

Sterile technique involves using a special cabinet (or hood) with a particular flow of air. Air is sucked into the cabinet and passed through a filter to get rid of any nasties before reaching the area containing the cells. Used air is extracted from the cabinet and disposed of elsewhere. Everything that enters the hood is sprayed with ethanol, and all of the equipment, such as pipettes and tubes, is certified as sterile by the manufacturer and only ever used once to prevent any potential contamination.

Cells must also be grown in special incubators that carefully regulate their environment – the majority of cells will grow best at 37˚C (body temperature – what a coincidence!), with some humidity and 5% carbon dioxide in the air, which helps maintain the correct pH.

What happens once you have a batch of cells happily growing?

They grow some more!


Happy cells are growing & dividing cells

Continuous cells will carry on dividing and growing – left to their own devices, they will run out of space and nutrients, and will eventually poison themselves and starve. This means that cells need to be regularly ‘split’ (officially called ‘passaging’) – the cells in one flask or dish are simply split up into several other flasks or dishes to continue growing with more space and more nutrients. This method means that cells can quickly be bulked up into huge numbers, ready to be prepared and used for various experiments.
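
To get a feel for how quickly splitting bulks cells up, here is a toy back-of-the-envelope sketch in Python – the flask capacity and split ratio are illustrative assumptions, not a real protocol, and real growth rates vary between cell lines:

    # Toy sketch: repeated 1:4 splits, keeping every flask.
    # The numbers are illustrative assumptions, not a protocol.
    cells_per_full_flask = 1e7  # assumed cells in one confluent flask
    split_ratio = 4             # each full flask is split into 4 new flasks

    flasks = 1
    for passage in range(1, 6):
        flasks *= split_ratio   # every flask grows to confluence, then splits
        total = flasks * cells_per_full_flask
        print(f"Passage {passage}: {flasks} flasks, ~{total:.0e} cells")

After just five passages you would (in theory) have 1,024 flasks and around ten billion cells – which is why, in practice, most of the cells are used for experiments or discarded at each split rather than all being carried forward.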

I’ve spent the majority of my fledgling research career doing cell culture, so I’m bound to be biased, but I think it’s pretty awesome.

 

If you have any questions about cell culture, feel free to ask in the comments section below, and let me know if you have any other biochemistry or neuroscience questions you’d like answered! You can also follow me on Twitter @TheBiocheminist

The Biocheminist

Cat-calling and Mental Health

It would be difficult to find anyone who hasn’t at least heard about, if not watched, the now viral New York street harassment video (if you haven’t seen it, you can watch it here).

It summarises an all-too-familiar experience that most women have faced at least once in their lives – and I mean MOST – as a staggering 98% of women surveyed in 2008 reported that they had experienced cat-calling and harassment. The video has caused an intense internet debate; as well as the majority outcry condemning the behaviour of the cat-callers, and demands to change this all-too-common occurrence, there have also been more negative responses, including defences of the men involved and violent threats directed towards the subject of the video.

While a lot of the debate has centred on the acceptability and frequency of these behaviours, and how they can best be tackled, less attention has been given to the psychological effects of experiencing cat-calling and sexual harassment, and their impact on mental health.

So I did some digging. 

While there is a wealth of scientific literature investigating the effects of sexual harassment at home or in the workplace on mental health, investigation of the effects of street harassment or cat-calling (referred to in these studies as ‘stranger harassment’) is a relatively new development. This came as a surprise to me, as studies dating as far back as 1978 found that women felt unsafe in a variety of social contexts, and a Canadian study in 2000 identified that stranger harassment reduced feelings of safety to a larger degree than harassment by known acquaintances. To put it more simply, harassment by strangers makes women feel even less safe and more scared than harassment by a known individual at work or at home.

Sexual harassment has been associated with nausea, sleeplessness, anxiety and depression. However, the literature focuses on two main components that may affect mental health:

  1. Stress

Arguably the main risk of stranger harassment to mental health is its effect as a chronic stressor – a stressor being any environmental or external event that causes stress to an individual, which becomes chronic when it is experienced on multiple occasions over time. For example, an individual may receive one cat-call on their walk to work. In isolation, this could be an unpleasant and mildly stressful event, or may not have any bearing on that person’s day. However, should that experience of a mild stressor occur every day for months or years, it becomes a chronic source of stress that can negatively impact mental health.

How does stress affect mental health?

One of the most studied outcomes of chronic stress is depression (which is also one of the reported outcomes of harassment). In fact, a popular mouse model of depression is called the ‘Chronic Unpredictable Stress’ (CUS) model, which is created by exposing mice to…well…chronic unpredictable stress. This includes social stress (such as overcrowding or isolation) and predatory stress (the scent or presence of a predator). It is such a popular model of depression because chronic psychological stress effectively and predictably causes anxiety- and depression-like behaviours in these mice.

Predatory stress increased inflammation in several brain areas in these mice – inflammation is the body’s response to threat, and in the short term it protects cells from harm. However, if inflammation is present for a long time, it can start to cause damage. Increased inflammation in the brain has been found in, and may exacerbate, Alzheimer’s disease and depression. Studies in humans have also identified damage to the structure and communication networks of the brain as a result of chronic stress, which can have a negative effect on learning, memory and mood.

So it isn’t really such a leap to imagine that the fear or threat felt following harassment, and the powerlessness over its occurrence, could become a chronic stressor. It can also arguably be equated with the ‘predatory stress’ used in mice. In a study that focused on the workplace, an association between harassment and poor mental health was identified. Specifically, individuals who experienced sexual harassment early in their careers were more likely to be depressed later in life. This was the case for both men and women.

  2. Objectification

Objectification is a societal issue that reaches beyond just cat-calling, but its role in stranger harassment has been investigated. The theory of self-objectification in the psychological literature says that when a person is sexually harassed by a stranger, they feel objectified. This causes ‘self-surveillance’ – they come to view themselves as the stranger views them, usually as a sexualised object, with their worth determined by how they feel they are viewed by others. In other words, they ‘self-objectify.’ This self-objectification has been found to have multiple negative effects on mental health, and has been associated with an increased prevalence of eating disorders, depression and substance abuse.

However, science hasn’t always been able to carry out this kind of study without bias and sexism.

Several studies that I have come across appear to lay responsibility for the effects of harassment on mental health and well-being on the women who have been targeted, rather than on the individuals who commit the harassment. After associating harassment and self-objectification with negative mental health and psychological consequences, they recommend that women be educated in better coping strategies, so that they become more resilient to the inevitable objectifying experiences, as a way to prevent mental health problems. It is this attitude – that cat-calling/street harassment/stranger harassment is a ‘normal’ experience that should just be put up with – which has allowed it to remain a prevalent and distressing problem in society.

Despite cat-calling and street harassment having been identified as an issue for at least the past 14 years, there has been no reduction in the number of women experiencing it, and very little attention has been given to the serious effects these experiences may have on mental health. The scientific community has not escaped bias in this area, although it has identified the association between harassment, stress and depression, and recognised that there may be a substantial psychological effect of frequent harassment. As the role of harassment in mental health gains more attention, scientists are beginning to investigate more thoroughly, including the negative effects that witnessing sexism has on bystanders, and why some men do it in the first place.

There is still a long way to go – both scientifically and socially. But with cat-calling and harassment carrying such strong risks to mental health, perhaps they should be considered a form of psychological assault.

For more information about cat-calling and harassment, and how it is being tackled, visit:

http://www.stopstreetharassment.org

http://www.ihollaback.org

The Biocheminist

Biochemistry is just like cooking… but try not to eat it

When I started my PhD, I was told that if you could follow the recipe in a cookbook, you could successfully carry out most experiments (success being measured here by a lack of spilling/breaking/wasting/ruining/blowing up anything, rather than by the experiment actually working AND giving you the result you hoped for). This is because experiments normally follow a specific protocol, which is fundamentally the same as following a recipe. However, the more I’ve worked in a lab, the more similarities I’ve seen with a kitchen… So here are some of the regular day-to-day kitchen things commonly used in the lab:

Cling film & Tin foil

Both cling film and tin foil are used on a daily basis – although special lab versions are available, normal supermarket brand versions are used a lot. Cling film is used for pretty much the same thing in labs as in the kitchen – to wrap things up for storage, to stop contamination, spillages, and evaporation. Tin foil is used to keep light out of things that may degrade in light – for example, when working with fluorescent tags and antibodies, the experiment will be kept under tin foil to prevent fading of the fluorescent signal.

Fridge freezer

The success of many an experiment is down to proper storage of your samples, and everything needs a different storage temperature. While the lab has fancy freezers set at -80˚C for RNA and long term sample storage, as well as liquid nitrogen dewars for cryopreserving cells at around -200˚C, there are also regular old fridge freezers. Fridges are set to +4˚C and are used for short term storage of DNA, some antibodies and various chemicals and reagents. Freezers are set to -20˚C, and are used to store all kinds of things, including protein samples, DNA and antibodies.

Microwave

There’s not much to say about this one! In the lab the microwave is used to heat up and melt things, although very rarely would those things ever be considered edible.


Milk

Marvel skimmed milk powder in particular is a laboratory favourite. It is most commonly used to make up a ‘blocking buffer’ for western blots – this is typically 5% milk powder in a saline/detergent solution (see ‘Western What’s??’ and its comments section!).
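
For anyone who wants the recipe arithmetic spelled out, here is a minimal sketch in Python, assuming that ‘5%’ means weight/volume (5 g per 100 mL) – a common convention, but check your own lab’s protocol:

    # Blocking-buffer arithmetic, assuming % means weight/volume (w/v).
    def milk_powder_grams(volume_ml, percent_wv=5.0):
        """Grams of milk powder for a given volume of blocking buffer."""
        return volume_ml * percent_wv / 100.0

    for volume in (50, 100, 500):
        print(f"{volume} mL of 5% blocking buffer: "
              f"{milk_powder_grams(volume):.1f} g milk powder")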

Milkshake

Milkshake brings all the mice to the yard – I mean – helps mice learn associations. Sweetened or condensed milk and milkshakes are used as rewards in mouse and rat learning experiments. For example, a mouse may learn to press a lever in response to a flashing light because they are given a drop of delicious milkshake when they do what they are supposed to do. The milkshake is positive reinforcement – exactly the same as treating my husband to coffee & cake when he goes shopping with me without complaining. I hear from colleagues that strawberry milkshake is a mouse favourite (and also a husband favourite).

Yeast

Just like the stuff used in bread and beer! Although for lab use it comes from a more controlled and regulated source than the dried variety from the shops. Yeast is a single-celled organism – and its simplicity has allowed the creation of various models that can be used to study the fundamental processes in cells that are required for life, for example how proteins interact with each other and how the cell cycle works. It has been particularly useful because it is so easy to grow and manipulate.

Nail varnish

Not really a kitchen accessory, but I’m sure someone will have painted their nails in a kitchen at some point. Specifically, the clear, quick-drying variety is preferred! A common way of looking at cells under a microscope is to grow the cells on a circle of glass called a ‘coverslip.’ When there are enough cells, the coverslip is placed upside down onto a glass microscope slide, so that the cells lie between the two layers of glass. Clear nail varnish is then painted around the coverslip to seal it onto the slide and to stop the sample from drying out.

I’m sure there must be more household things used regularly in labs – especially with scientific ingenuity and tightened budgets! I like to think I’m pretty good at cooking, and I can follow a protocol pretty darn well! But most important of all is making sure there’s always enough milk, both at home and in the lab, as running out in either place can really ruin my day!

The Biocheminist

N.B. Posts will now be appearing fortnightly rather than weekly, for the sake of the posts on here and for the sake of my experiments in the lab!

Women in Research

In my relatively short time in academia, I’ve noticed something odd.

When I was an undergraduate studying psychology, almost the entire class was women, with only a few men scattered around the lecture hall. But almost all of our lecturers were men. I didn’t think too much of it at the time – there are so many career paths following a psychology degree that I assumed the women went off to something else rather than stay to give lectures and mark endless exam papers (‘if you can’t do, then teach,’ after all! – my husband *a teacher* asks me to clarify that this is a joke). But then as my course went on, I realised that the majority of students don’t actually manage to get into those highly competitive career paths, which also require years of additional training. So where did those hundreds of women go? How did academia filter out the women and keep those few men?

I admit, I didn’t really think about it beyond that – I was more concerned with seeing the world and getting a job. As it was, I couldn’t get a job (always too under- or over-qualified!), so I embarked on a PhD instead.


But the same thing happened again. My neuroscience PhD course (at a different university to my undergraduate degree) had an intake of 5 students per year, and the majority of its students have always been women. My old PhD office sat 8 of us, 7 of whom were women. In fact, the majority of researchers (technicians, students and post-docs) in my current lab are women. But the men in professorial and powerful positions greatly outnumber the women, and this certainly doesn’t reflect the proportion of males and females lower down in the academic food chain.

Perhaps I am being paranoid? Or just over-thinking it? Because I don’t believe for a second that women are less capable of achieving those top positions (many do manage and are great role models). And there’s no sexism in academia, is there? Is there??

So I looked it up, and I found some statistics.

I despaired that the statistics (from the UK and from the U.S.) confirmed my observations: there is a huge disparity between the number of women entering scientific research jobs and the number carving out successful academic careers (although I was also pretty pleased that I wasn’t just making it up!).

In a 2012 report by WISE, it was found that more girls took A-level biology than boys, and what’s more, girls got higher grades in STEM (science, technology, engineering & maths) subjects! (There doesn’t seem to be an exact equivalent of A-levels in the USA, but they sit somewhere between APs and freshman year.)

So us women are obviously good at science at school, and actively show an interest in it. But another report, by the Royal Society of Chemistry (RSC), identified that only 25% of women stayed in science following their undergraduate degrees, compared to 40% of men, and a lower proportion of women than men stayed in research following their PhDs. For 2012-2013, the Higher Education Statistics Agency showed that despite making up 45% of all academic staff in the UK, women made up only 22% of university professors.

So why are we dropping out of a subject we enjoy and are good at, along what the RSC describes as a ‘leaky pipeline’? Why aren’t more of us getting to the top?


Well, it seems to be a combination of things – some of which I’ve experienced, some I’ve heard about, and some I’m preparing for. Here are just three of the biggest contributors:

  1. Babies

I don’t like to concede that having a family can be detrimental for a woman in academia, but it is. And it’s a biggie. Academia is very competitive. In many cases, taking time out for maternity leave (if you even get maternity leave) is not compatible with keeping up with the latest research, meeting deadlines, running experiments and publishing papers – all of which are essential for getting the best grants, the most funding and the most sought-after jobs. Having a baby can be viewed as putting yourself out of the game, and it’s not easy to get back in. Pregnancy and maternity leave can be a long period of time that men just don’t need to consider or make up for.

And, oh, the planning that has to go into it! I recently got married, and of course the inevitable baby questions have come in waves – including from supervisors and colleagues. On the face of it, this seems like an intrusive question, but for a woman in science it’s a very important consideration that can impact her career, so it probably needs discussing. While I don’t intend to get going on that issue for a few years, I feel like I ought to start planning when the best time will be NOW, and it’s getting pretty complicated. As I mentioned in a previous post, moving between labs internationally is considered the best thing you can do for your research career. So would I be happy starting a family abroad? Where in the world will I even be in a few years? What if I can’t get a job abroad in the next few years, start a family, and then lose the opportunity to take a contract somewhere else? What if I get a job abroad and leave it too late? And the questions continue! A sacrifice or a concession is going to have to make an appearance somewhere, and it’s easy to see how and why so many women sacrifice career progression for a family – for most women, short contracts and instability aren’t compatible with propagating the human race.

Of course, men will have similar family pressures, but their lack of a uterus and the prevalence of traditional parenting roles make this an issue that is easier for them to get around.

But it’s not all about babies!

Plenty of women in the top academic jobs have families. So it’s not a total road block to success. But it does make things more difficult for women compared to men in the same position.

  2. Education doesn’t eradicate misogyny and sexism

You’d think that years of high-level education and being at the forefront of scientific advancement and human understanding would prevent prejudiced attitudes, but no, not entirely. Attitudes to women within academia are no better or worse than in the ‘real world.’ I’ve never witnessed any overt sex discrimination at work, but the subtleties are pretty common. For example, I have overheard a male professor refer to a female post-doc as ‘cutie’ in the lab (a term not reserved for his male post-docs, I assume). Whether she is happy to be called this is her business, and perhaps this language can be put down to senior staff belonging to an ‘older generation.’ But perhaps more disturbing was hearing a male PhD student discussing a senior female member of staff and declaring that he didn’t respect her work or think she was any good, partly because ‘she wasn’t even good-looking.’ If the new generation of ‘forward thinkers’ believe that the abilities of a female scientist, MUCH more qualified and experienced than themselves, can be judged by her appearance, then what hope do we have??!?! Thankfully, the people who hold these opinions are in the minority – but at the moment the minority are becoming heads of department…

Now, I very much doubt an arrogant PhD student is going to affect the career choices of an established member of staff. But in environments where these attitudes are more common, where female staff are judged by their clothing, where their chest achieves more eye contact than their face, and where their jobs are considered less prestigious because they are held by a woman, many less senior women and students won’t feel comfortable and won’t want to work there. Understandably.


  3. We don’t talk about it. Or do we talk about it too much?

In my research for this post, I came across numerous articles that claimed that women are told from the beginning of their research careers that they will hit more roadblocks, will be discriminated against and will have a tougher time than their male colleagues – and that this warning is enough to push women out of a research career at an early stage. While I can totally understand why this would be the case, I’ve never actually been told this (except for the multiple, disheartening times I have now read it on the internet!).

Perhaps there is a disparity between the UK and the USA? We tend to be a bit more reserved in the UK, so maybe that’s why no one has ever discussed this with me or my colleagues. But I think this lack of discussion is probably just as damaging as terrifying warnings, if not more so! For example, I intended to send an email to some of my colleagues asking if they had ever experienced any instances of sexism at work, or if they thought there was a problem with the proportion of women making it to top jobs. I typed it out, I put in the addressees…and then I deleted it! I’m ashamed to say I was too embarrassed to send it – I’ve never discussed this issue with any of my colleagues, and I have no idea what they think about the position of women in science. If it were more openly discussed, perhaps it would be easier to challenge inappropriate behaviour and attitudes in academic workplaces, and to provide more support.

In short, there are multiple reasons why we are losing talented female scientists from academia, but they can be tackled.

Programmes are now in place that are aiming to change attitudes and increase support – for example, the Athena SWAN charter is now in place at 114 UK universities, and aims to support and encourage women in science, as well as to promote diversity and equality.  The problem has been reported in the media, academics are increasingly aware of it, and both women and men are becoming less accepting that things ‘are just the way they are.’

Hopefully then, we will soon start to see fewer women leaving academia, and a more equal representation of women in those top positions, respected for their talent, intelligence, abilities and hard work.

The Biocheminist

For more…

https://www.insidehighered.com/news/2008/06/12/women

http://www.bbc.co.uk/news/education-26259644

http://www.telegraph.co.uk/women/womens-life/9892502/A-big-brain-isnt-enough-in-academia-you-need-style-too.html

http://www.theguardian.com/higher-education-network/blog/2012/may/24/why-women-leave-academia

http://www.ecu.ac.uk/

http://www.aauw.org/resource/why-so-few-women-in-science-technology-engineering-mathematics/

http://www.rsc.org/images/womensretention_tcm18-139215.pdf

http://www.wisecampaign.org.uk/files/useruploads/files/wise_stats_document_final.pdf

PC Arrrggghh!

PCR (see what I did there?) isn’t as terrifying as the title of this post suggests – although it has been known to induce screams of frustration in poor, hard-working students and researchers!


So what is PCR?

The ‘Polymerase Chain Reaction’ (don’t worry – the meaning of this will become completely clear!) is one of the most commonly used lab techniques. Briefly, it is a technique to ‘bulk up’ (or ‘amplify’) a certain piece of DNA that you are interested in from a sample of mixed bits of DNA. This makes it easier to find the interesting piece among all the boring bits you aren’t interested in.

There are several reasons why you might want to do this, but most frequently it is used to check whether a certain piece of DNA is present in your sample.

Why would that be useful?

  • You might want to do this if you have tried to change the DNA in a cell line or animal model and need to check whether it worked.
  • Alternatively, PCR can be used to see if there are any natural mutations between, say, the DNA from ‘healthy’ people and the DNA from a group of people with a particular disease.
  • Along the same lines, it can be used to test for genetic diseases (where the cause of the disease is known to be in the DNA and is passed down through families).
  • Outside of the research lab and hospitals, it is also the technique used for paternity testing (as appears frequently on The Jeremy Kyle Show etc.) and in forensic science (e.g. as seen in CSI) – in these cases, two DNA samples are compared for their similarity to each other; in forensics, PCR can also create a much larger sample for testing from an initially tiny trace of DNA left at a crime scene.


Finding what you want in the DN-hAystack! (LOL)

But what actually is the polymerase chain reaction?


It’s a tricky one to explain as there are several stages, so first I’ll note the two important things you need to start with, followed by the process itself. A word of warning – it’s one of those things where looking at the pictures really helps!

You need to:

  1. Collect your DNA sample

This might be from a cell line, from a lab rat sample, or from a sample taken from a patient or volunteer. The DNA is then typically ‘extracted’ from the cells (animal and human samples consist of cells, which is where the DNA is stored). By extracting the DNA and getting rid of the other bits of the cell, you get a ‘cleaner’ sample that is less likely to fail during PCR. This sample is called the ‘DNA template.’

  2. Prepare your primers

DNA is made up of a chain of ‘bases’ (or ‘nucleotides’) – Cytosine, Guanine, Adenine and Thymine (C, G, A and T) – which pair up (C with G, A with T) to form the double-stranded DNA helix. In order for PCR to amplify the part of the DNA you are interested in, you need to tell it which part to pay attention to. This is done with ‘primers’ (so called because they prime the reaction). Primers are short, single-stranded chains of bases designed to match (‘complement’) the sequence of bases at the interesting region of DNA.
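For anyone who likes to see an idea as code, here’s a minimal sketch of that pairing logic in Python. It’s purely illustrative – the sequence is invented, and real primer design considers length, melting temperature and specificity too:

```python
# Watson-Crick base-pairing rules: A pairs with T, C pairs with G
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """The sequence of the strand that 'seq' pairs with, read in the
    conventional 5'-to-3' direction (hence reversed)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# A made-up 'region of interest' on one strand of the template DNA
region = "ATGGCGTACGTTAGC"

# Toy primer design: the forward primer matches the start of the region,
# the reverse primer pairs with the opposite strand at the region's end
forward_primer = region[:8]
reverse_primer = reverse_complement(region[-8:])

print("Forward:", forward_primer)   # ATGGCGTA
print("Reverse:", reverse_primer)   # GCTAACGT
```

The ‘reverse’ primer is a reverse complement because the two strands of the helix run in opposite directions, so the second strand reads backwards relative to the first.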

Steps for PCR:

  1. Denaturation

This is basically just separating the two strands of DNA from each other to form single strands. This is so that the primers are able to pair with their complementary sequence on the DNA strand. Denaturation is done by briefly heating the DNA to 94-98˚C.

[pcr1: diagram – the double helix separates into two single strands]

  2. Annealing

The temperature is dropped to 50-65˚C to allow the primers to pair (‘anneal’) with the DNA.

[pcr2: diagram – primers pair with their matching sequences on the single strands]

  3. Extension

An enzyme called DNA Polymerase (hence the ‘P’ in ‘PCR’!) recognises the primer-DNA pair, and recruits spare bases/nucleotides from the surrounding solution (these are added by the researcher, along with the DNA polymerase).

The DNA polymerase is typically taken from a bacterium called Thermus aquaticus and is referred to as ‘Taq’ – this is used because it can withstand the high temperatures used in step 1, whereas polymerase from most other sources would break down and stop working.

The polymerase then synthesises a new strand of DNA that matches the original strand of DNA. Primers are designed to match both strands of DNA (as the sequence of the second strand will be reversed compared to the first), so during the extension phase, both the ‘forward’ and ‘reverse’ strands of DNA are synthesised to form a copy of double stranded DNA.

[pcr3: diagram – Taq polymerase extends from each primer to build new double-stranded copies]

  4. And repeat!

The previous three stages are repeated 20-40 times. Each repeat doubles the amount of DNA, so there is an exponential increase in the number of copies of the DNA sequence you are interested in (until the spare nucleotides and DNA polymerase run out). For the curious, there’s a toy code sketch of these cycles below.
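This sketch just illustrates the arithmetic, in Python, with made-up numbers – real reactions are less efficient than perfect doubling and plateau more gradually:

```python
def simulate_pcr(target_length: int, cycles: int, nucleotide_pool: int) -> int:
    """Toy model of amplification: each cycle of denature/anneal/extend
    doubles the copy number, consuming free nucleotides from the pool."""
    copies = 1  # start from a single template molecule
    for cycle in range(1, cycles + 1):
        needed = copies * target_length  # nucleotides used to copy every molecule
        if needed > nucleotide_pool:
            print(f"Cycle {cycle}: nucleotide pool exhausted - amplification plateaus")
            break
        nucleotide_pool -= needed
        copies *= 2
    return copies

# 30 cycles on a 200-base target could give up to 2**30 (~1 billion) copies
print(simulate_pcr(target_length=200, cycles=30, nucleotide_pool=10**9))
```

Perfect doubling is why a single template molecule can, in principle, become over a billion copies in just 30 cycles (2^30 ≈ 1.07 billion) – and why running out of ingredients eventually flattens the curve.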

Now you have loads of a specific sequence of DNA! Yay!

The resulting DNA can be passed through a gel and separated by size, in the same manner as the protein described in ‘Western Whats?’. A difference in size indicates a different nucleotide sequence, and therefore a potential mutation or mismatch between samples. Large amounts of amplified DNA are also required for other biochemical techniques, such as sequencing (which reads the whole sequence of the DNA strand one nucleotide at a time), or for inserting into the DNA of another organism, such as yeast or bacteria (to see the effect this region of DNA may have on a cell).
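As a toy version of that size comparison – the fragment sizes here are invented, and on a real gel you’d be comparing band positions against a ‘ladder’ of known sizes:

```python
# Hypothetical fragment sizes, in base pairs, read off a gel
healthy_fragment_bp = 480
patient_fragment_bp = 512

difference = patient_fragment_bp - healthy_fragment_bp
if difference:
    kind = "insertion" if difference > 0 else "deletion"
    print(f"{abs(difference)} bp difference - possible {kind} in the patient sample")
else:
    print("Fragments are the same size - no mutation detected by this method")
```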

So that seems pretty straightforward, right?

Well, yes, PCR is one of those things that can be very easy – but only when it works! Unfortunately, every stage of PCR is very sensitive to disruption, and many different sequences of DNA will need slightly different conditions for the PCR to work. For example, both too much and too little template DNA can completely ruin a PCR, and if the primers are not specific enough to the region of interest, they can pair with the wrong section of your template DNA and cause all kinds of rubbish to be amplified!
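To give one concrete example of this fiddliness: the annealing temperature in step 2 is usually chosen based on the primers’ estimated ‘melting temperature’ (Tm). A common rule of thumb for short primers is the Wallace rule; here is a rough sketch in Python (the primer is made up, and real primer-design software uses far more sophisticated formulas):

```python
def wallace_tm(primer: str) -> int:
    """Rough melting temperature (in deg C) for a short primer:
    2 degrees per A or T, 4 degrees per G or C (the Wallace rule)."""
    at = sum(primer.count(base) for base in "AT")
    gc = sum(primer.count(base) for base in "GC")
    return 2 * at + 4 * gc

primer = "ATGGCGTACGTTAGCAGT"  # an invented 18-base primer
tm = wallace_tm(primer)
# A common starting point is to anneal a few degrees below the Tm
print(f"Estimated Tm: {tm} C -> try annealing at about {tm - 5} C")
```

Pick primers whose estimates land close together, and you stand a better chance of both annealing properly in the same reaction.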

It therefore takes an experienced/skilled/lucky researcher to get a perfect PCR the first time; otherwise, you start to hear those screams…


The Biocheminist


Did this post make sense to non-scientists? I’d love some feedback on how understandable my posts are and if I’m managing to explain biochemistry and neuroscience to you!

Is there more about PCR you would like to know? Or are there any other lab techniques or neurological diseases you’d like to learn more about? Comment below!

Postdoc-ing around

I have recently graduated from my PhD and am working in my first post-doctoral (post-doc) position.

Before I carry on with the rest of this post, I feel I should clarify that I love my job and I intend to stay in academic research!

However, continuing an academic research career beyond your PhD is very competitive, and establishing yourself as a respectable, employable scientist can be incredibly taxing and stressful. But if it wasn’t difficult, then it wouldn’t be worth doing!! Right?!


(Me at graduation, pondering science – probably)

The major problem that many PhD students face when nearing the end of their project is that there are not as many post-doc jobs as there are post-docs. Of course, not every new PhD wants to stay in academia, and some leave of their own volition, but there are also those who have no choice but to put their academic careers on hold. A quick ‘Google’ brought up various diagrams and reports floating around the internet that place the percentage of PhD students who continue on to an academic post-doc position at around just 20%. From my own experience and that of my colleagues, this figure isn’t all that surprising, and is likely to be a result of a lack of available funding/jobs, stress and a desire for financial stability.

Following a PhD, there are three main avenues to a post-doc job, each with its own pros and cons:

  1. Your supervisor/someone they know has money to be able to pay you and keep you in their lab

Pros: You know the lab and the people, you will most likely be working on a project you are already involved in or know about

Cons: There may not be much money available, and it may only extend to a couple of months’ work. On the other hand, if there is a constant stream of funding available, you may end up staying for several years and getting ‘stuck.’ It is increasingly expected that new researchers move around different labs and establish an international career (referred to as ‘mobility’), and remaining in the lab where you completed your PhD for several more years can be viewed unfavourably.

  2. You apply for advertised post-doc positions in different labs and universities

Pros: You can move on from your PhD lab. If you didn’t enjoy the subject you were researching, you have the opportunity to transfer your skills to a new research area. You experience a different lab, meet new people, make new connections, and enter a ready-set-up project, often with a clear job description and goals.

Cons: You may have only applied to positions in different research areas because there were no available jobs in your area of interest, and you don’t really want to move or change. This option is also made more difficult and stressful when post-docs have families (especially when considering jobs abroad).

  3. You apply for your own grants and funding to carry out the research you want to carry out

Pros: A first step towards independence! You outline the project you want to do, and if a panel of scientists decide it is worthy, the project (and hopefully your salary) is funded. It looks very good on your CV. Depending on the type of funding awarded, you may be able to choose where you work (as most labs will allow you to work there if you bring your own money!)

Cons: These are the most difficult positions to get, especially at the very beginning stages of an academic career. They are therefore very competitive and very demanding; there is often more responsibility involved as you have more control over your account, administration and organisation. There are several opportunities available for new post-docs, but the majority of grants are designed for more experienced researchers or whole lab groups.

So once you have your first post-doc, perhaps by one of these three avenues, everything is sorted, surely?


(Popularising women in science, FTW!)

Well, no, not really. The first few years following a PhD can be incredibly unstable – contracts can be as short as 6 months, and are rarely longer than 3 years, with many being around just 1-2 years long. After this period of time, there is no guarantee that more funding will be given to your lab to keep you, or that you can win your own funding to carry on your work, and the whole process starts again. It is this instability and lack of financial protection that causes many scientists to leave academia for more reliable jobs and careers.  

I am currently working on a small grant I won to carry out a project I designed for my first post-doc position, which is in the same lab where I completed my PhD. This contract ends in March, though, so the search for my next source of funding is beginning now (although it should probably have begun a few months ago!). I remain determined to stay in academia and to pursue my research interests, but it’s not going to be an easy fight!

The Biocheminist

Why do I love research?

After a week or so of failed experiments, I ask myself – why do I love research? Why do I keep doing it?

Because science is cool.

Really. It is.

What I love so much about my job is being given the ability and opportunity to discover something about a cell, or about the brain, or about a particular disease, that no one in the world knows, or has ever known before.

It might be the tiniest, potentially most insignificant piece of information, but that doesn’t matter to me – I found it. I discovered it. And that’s awesome!

Towards the end of my PhD, I discovered that this is where the difference lies between those who want to carry on doing research, and those who are sick of it – I’ll explain further:

Research is hard. Most of the time, what you are doing won’t give any results. You can slog away for hours, days, months in the lab, and still not get what you’re after.

(Now, I don’t mean not getting the result that you want, I mean any result! New experiments need practising, optimising, preparation, collaborations with other labs etc, all of which take time and usually don’t work on initial attempts.)

I’ve observed that the difference in enthusiasm for research lies in the response not only to these tricky (some would say horrendous/soul-destroying) periods, but also to the moment when the experiment finally works following the previous failures. For those who don’t want to stay in research, these small successes aren’t worth the hassle, and don’t counteract the months of torment.

For those who stay, it’s worth it.

No matter how small or trivial the result may be, the buzz you get from finding it is worth everything it took to get there. An enduring curiosity, drive and passion keep you going, because you just want to know the answer – probably to an arbitrary question you’ve asked yourself.

But it still matters. It’s still worth it.
