The Milky Way as seen with Mount Rainier in the foreground.

The Milky Way Exploding from Mount Rainier (Photograph by David Morrow) – Smithsonian Photo Contest 2013

Navigating by the stars is thousands, if not tens of thousands, of years old, but even so I’ve heard it said by a few outdoor instructors recently that travelling at night is something to be avoided if possible, because of the risk of becoming lost, falling down something or over something, or some other darkness-related accident. In opposition to this, I would like to say that I have been travelling at night for over thirty years and, rather than treating the dark as something to be frightened of, I consider it a friend I have become familiar with. Equally, seafarers have been finding their way across the vast expanses of the ocean using the movement of the sun, moon and stars ever since humans first pushed their craft into the sea and strayed out of sight of the coast.

I would, however, totally agree that the British weather has a major say in our ability to navigate, whether by day or night, given that a decent hill fog, rain or snow can hugely restrict our ability to work out where we are. I can personally reveal that my worst experience of being lost was at night on a hillside in Shropshire, when all I could see was my feet and I was reduced to navigating by feeling the slope of the hill against a mental map of the territory. This is very similar to the story told of someone learning Polynesian methods of navigation from an old master navigator: during the night, when they could see nothing, the master would lie down in the bottom of the boat and ‘feel’ the waves and how they were hitting the hull in order to find his way across the ocean.

In that respect I do agree that there will be times when you will not be able to see your hand in front of your face, let alone the north pole star, Polaris, some 434 light years away. However, the ability to read the sky is not limited to our own little, weather-beaten island; it is a skill that is transferable across the globe, and learning how the position and movement of the celestial objects can be used to determine time, place and direction goes a long way towards giving us the confidence to find our way. (For budding astronauts out there, star navigation is also an important part of space travel, given that magnetic north and south have no meaning outside the Earth’s influence.)

History of Star Navigation

We humans have possibly had a very long relationship with the heavens and the study of the movement of celestial bodies across them, given that some of the great prehistoric ritual and religious monuments of the world show some form of celestial alignment: Stonehenge, the mysterious Carnac stones of Brittany, the vast Newgrange passage grave in the Boyne Valley in Ireland, the Great Pyramids of Egypt, the Pyramid of the Sun in the city of Teotihuacan in Mesoamerica, and so on. Even today this association with the sky has been preserved through the alignment of nearly every Christian church, given that they are aligned east-west.

However, rather than geographic position, these early structures were celebrating a certain position in time, i.e. the longest and shortest days of the year (the solstices) and the rising and setting of the sun, and might have been associated with our ancient farming forebears’ desire to mark specific important points in the farming calendar. Given that the cycle of the year would be just as important to a hunter-gatherer community – the same importance being placed on the migration of animals and the appearance of certain plants during the year – we must suppose that humans have been studying the night sky from very early in their cultural development.

Our earliest textual reference to using the stars to navigate by comes from one of the earliest written pieces of literature in the world. Homer’s Odyssey – a text most likely composed around the 8th century BC – has Calypso telling Odysseus how to use the stars to navigate by as he sailed eastward from her island:

Gladly then did goodly Odysseus spread his sail to the breeze; [270] and he sat and guided his raft skilfully with the steering-oar, nor did sleep fall upon his eyelids, as he watched the Pleiads, and late-setting Bootes, and the Bear, which men also call the Wain, which ever circles where it is and watches Orion, [275] and alone has no part in the baths of Ocean. For this star Calypso, the beautiful goddess, had bidden him to keep on the left hand as he sailed over the sea. For seventeen days then he sailed over the sea, and on the eighteenth appeared the shadowy mountains [280] of the land of the Phaeacians, where it lay nearest to him; and it shewed like unto a shield in the misty deep

The mention of the Pleiades (Seven Sisters), Bootes, the Bear (Ursa Major, or the Plough to us Brits) and Orion describes some of the most easily identifiable star patterns in the night sky, and also indicates how influential Greek astronomers were to our understanding and naming of most of the major celestial bodies that we see today – although many have acquired the Arabic translation of an original Greek name, because of Islam’s inheritance of Greek learning after the fall of the Roman Empire and Europe’s descent into the cultural backwater of the medieval period.

We know, therefore, that the Greeks were using stars to navigate by at the very earliest part of their history, and this must have allowed men like Pytheas of Massalia – an astronomer and geographer – to navigate by ship from Greece through the Strait of Gibraltar to the British Isles and possibly Norway, describing on his journey the unique experience, for someone from the Mediterranean, of the sight of the Midnight Sun, polar ice, Germanic tribes and possibly Stonehenge – all in 325 BC.

Secondary evidence would suggest, however, that celestial navigation goes back even further through other cultures, such as the Chinese, since it was the Chinese Han Dynasty that invented the magnetic compass in the 2nd century BC and built ships that could carry 60 tonnes of cargo. The Indus Harappan civilisation of the late 4th to late 2nd millennium BC was likewise building large canals and deep-water ports in order to transport goods by sea as far as Egypt. But even here these cultures were perhaps drawing on a far greater depth of understanding about the movement of the stars, given that cliff carvings at Damaidi in Ningxia, dated to the 7th millennium BC, feature pictographs of the words for sun, moon and stars and represent some of the earliest occurrences of written Chinese. Even that is still young compared to the generally accepted theory of human migration by boat to Australia some 50,000–60,000 years ago, which involved a lengthy crossing of open water, out of sight of land.

Obviously, before the advent of charts, compasses and timepieces, navigation was much more a skill of way-finding, i.e. knowing where you were in relation to where you wanted to get to, and using the movement of the celestial bodies, as well as other naturally occurring features, to decide which direction you needed to head. In that respect, knowing the exact points of the compass – north, east, south and west – was not of primary importance; rather it was a case of understanding how the sky changed on a daily basis in terms of the track of the sun, moon and stars, and equally how these movements changed throughout the year as different stars and constellations appeared and disappeared, or as the sun and moon altered their paths across the sky.

Basics of Star Navigation

The practicalities of star navigation are probably what most people would like me to get to, and therefore I will begin.

If you only know one thing – know about Polaris

In the UK, most people are aware of ‘The Plough’, which is the most easily recognisable asterism in the night sky. An asterism, by the way, is a recognisable pattern of stars that is not itself one of the formal constellations; it differs from a genuine star cluster – the Pleiades being an example – whose stars really are neighbours in space. The stars of the Plough are actually at hugely different distances from us, but lie in an alignment that makes them look as if they are at the same distance.

From the Plough, most navigation courses will show how to find Polaris (the North Star), which is important in that it sits almost directly over the north celestial pole (within about three-quarters of a degree), and therefore the night sky, for us in the northern hemisphere, rotates around this star. As an aside – which I like, though it has no impact on us at all – Polaris has only been the North Star for a comparatively short amount of time; skywatchers of the 3rd millennium BC would have looked within the dragon-shaped constellation of Draco, which weaves between Ursa Major and Ursa Minor (the Great and Little Bears), for their pole star.

Obviously, once we have Polaris we have north, and can then very easily work out the other points of the compass. As a quick note: just because Polaris sits over the celestial pole, do not presume you are looking for a bright star. Polaris is not that bright and, without the guide from Ursa Major, it would be quite difficult to locate.
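For those who like the geometry spelled out, the usual classroom trick is to take the two ‘pointer’ stars at the end of the Plough’s blade – Merak and Dubhe – and extend the line from Merak through Dubhe by about five times their separation; the moderately bright star you arrive at is Polaris. A minimal sketch of that extrapolation (the coordinates here are purely illustrative, not real catalogue positions):

```python
def extend_line(p_from, p_to, factor):
    """Extend the line from p_from through p_to by `factor`
    times their separation, returning the new (x, y) point."""
    dx = p_to[0] - p_from[0]
    dy = p_to[1] - p_from[1]
    return (p_to[0] + factor * dx, p_to[1] + factor * dy)

# Illustrative sky positions: Merak and Dubhe one unit apart.
merak = (0.0, 0.0)
dubhe = (0.0, 1.0)

# Five pointer-separations beyond Dubhe lands roughly on Polaris.
polaris_estimate = extend_line(merak, dubhe, 5.0)
print(polaris_estimate)  # (0.0, 6.0)
```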

Star Navigation is so much more than Points of the Compass

Finding the points of the compass from the night sky is fairly straightforward. Polaris is the best option, but you can also use the ‘dagger’ hanging from the belt of the great winter constellation of Orion the Hunter to get a rough estimate of where south is. Rather conveniently, Sirius – one of the brightest stars in the winter night sky – hangs low to the horizon and, during the early part of the year, sits roughly due south in the evening. Also, if the moon is a crescent, waxing or waning, then extending the line made by the tips of the crescent down to the horizon will provide another rough indicator of south. Planets will also rise at certain times of the year in known locations and, rather conveniently, because all the planets within the solar system lie roughly on the same plane, all the visible planets will sit on what astronomers call the ecliptic and therefore form an arc across the sky. Pluto is different – in many ways – from this, but if you can locate that far-flung body in the night sky then you must have access to some serious equipment.

NB: On a completely useless note, you can also roughly estimate south by looking at the outside of a house, as long as it has a satellite dish, given that (in the UK at least) satellite dishes point roughly 28 degrees east of due south. Of course, you could alternatively just knock on the door and ask; the choice is yours.

Anyway, this is all useful to a certain extent, but knowing where north, south, east and west are is only a start. It does not provide you with any great idea of where you are, and is only of some use as to where you need to go, since if you don’t know where you are, then getting to where you want to go is going to prove somewhat difficult. As the old Irish joke goes (the source of which I have not found, and which may not prove to be Irish at all): “Well sir, if I was you I wouldn’t start from here“.

Polaris in this case does have a second use, in that its height above the horizon, as the observer looks north, is the same as the latitude of the observer. In other words, if Polaris is 52 degrees above the horizon, then the latitude of the observer is 52 degrees, which is the position of the UK from the equator. As a rough guide, a finger is one and a half degrees if you hold your arm out straight and your hand in front of your eye line. Alternatively, and more easily achieved, is knowing how many degrees your hand spans (mine is 7.5 degrees) and conducting a quick game of “one potato, two potato, three potato, four” up to the position of Polaris in the sky. This method is never going to give you enough accuracy to be truly useful on land, given that 52 degrees of latitude means you have managed to narrow your location down to somewhere within the British Isles, or more probably northern Europe… probably… and that is supposing that you don’t have fat fingers.
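The ‘potato counting’ method above is just multiplication, and can be sketched like this (the 7.5-degree hand-width is my own calibration, used here as an assumed default; yours will differ):

```python
def latitude_from_hands(hand_widths, degrees_per_hand=7.5):
    """Estimate latitude (in degrees) from the number of hand-widths,
    held at arm's length, between the horizon and Polaris."""
    return hand_widths * degrees_per_hand

# Roughly seven hand-widths up to Polaris puts you somewhere
# around the latitude of the British Isles.
print(latitude_from_hands(7))  # 52.5
```

Crude, certainly, but enough to separate northern Europe from the Mediterranean.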

We can, of course, instil more accuracy into this by using better methods than ‘finger power’. Early astronomers and seafarers realised this and invented instruments that measured inclination, such as the mariner’s astrolabe, the cross-staff and the quadrant, all of which were essentially methods of measuring the angle of stars – mainly Polaris – above a stable and flat horizon. All of these instruments were forerunners of the sextant which, along with the marine chronometer, revolutionised navigation in the 18th century, but which was still based on the observer’s angular relationship to specific celestial bodies.

However, if we disregard the sextant method of star navigation for this article – given that if you carry a sextant one would suppose you know how to use it – then we fall back on using the track of the stars as a way-finding technique, and in this respect we can do no better than follow the Polynesian tradition of seafaring navigation.

Star Navigation (Part Two)

Part Two of Star Navigation will be an advanced guide to how observing the motion of the sun and moon, as well as how the stars and constellations move during the night and over the course of the year, can be extremely useful as a navigational skill. I will explain how the moon travels around the Earth in the opposite direction to the one in which we observe it cross the sky, and how the Greeks made the night sky into a Greek tragedy – in this respect we’ll see how the parents of Andromeda really mucked it up for their daughter, how Orion became too big for his own hunting boots and why Hercules stands close to the horizon. We’ll also discuss some of the great astronomical members of the sky, such as the vast star Betelgeuse (a star beloved by all readers of The Hitchhiker’s Guide to the Galaxy) and its importance within the Winter Triangle, and why we should feel a certain fondness for the Orion Nebula.




I have just been listening to the BBC Radio 4 programme Costing the Earth: Feeding the Crops of the Future, where they discuss the emerging technologies being developed to extract urine from human waste at the toilet, so that it could potentially be used for the fertilisation of crops. It is slightly off kilter in that the discussion was about what is contained within urine that makes it potentially useful as a fertiliser; what I did find interesting, however, is that this is being described as something new, whereas urine and faecal matter have long been considered very useful human by-products that were collected and processed for a variety of uses. It is only within the post-war period that society has become more squeamish and prefers to treat such products of human biology as ‘dirt’ rather than as a useful asset. This is also in line with our fascination with sterile conditions, which has been linked to the rise in allergies and in antibiotic-resistant bacteria such as C. difficile and MRSA.

Anyhow, back on track, let’s get down to drinking urine for survival.

Oh no, he’s been swigging his for ages
Now, I don’t know if anyone remembers the ‘Potato’ episode from the 1986 series of Rowan Atkinson’s TV comedy Blackadder II, in which Percy, Blackadder and Baldrick set forth to find fame and fortune on the high seas, only to end up with a captain who is unable even to find France, and are therefore reduced to drinking their own urine:

Edmund: Yes, well, I’m not feeling very thirsty at the moment. I had an egg-cup full of stagnant water three weeks ago. Oh all right, come on, let’s get on with it. [He stands up]

Baldrick: Should we drink each other’s or stick to our own?

Edmund: Is Captain Rum joining us for this bring-a-sample party, or is he going to sit this one out?

Percy: Oh no, he’s been swigging his for ages. He says he likes it. Actually, come to think of it, he started before the water ran out.

This particular episode has great meaning for me, since it also contains the sketch in which Melchett hands a map to Blackadder just before he is due to set out, remarking that it has been put together by the foremost cartographers of the time; when Blackadder notes that it is blank, Melchett retorts that if he could fill it in as he goes along, they would be very grateful.

What is in this Urine!
Anyway, this is not meant to be a trip down memory lane through the TV comedies of my youth. Instead it is a good place to start answering the question that always gets asked on our survival courses: is there any situation in which you would consider drinking your own urine – or anyone else’s urine, for that matter?

 “Urine is a sterile liquid by-product of the body secreted by the kidneys through a process called urination and excreted through the urethra.”

NASA and Urine
A 1971 report prepared for NASA, “Composition and Concentrative Properties of Human Urine“, gives a fairly comprehensive description of what is in human urine; it was written in order to understand how to extract potable water from urine, something essential for long-duration human spaceflight. It shows that urine is mostly water – 95% – with the remaining constituents, in decreasing levels of concentration, being electrolytes, nitrogenous compounds, potassium, vitamins, hormones, organic acids and miscellaneous organic and inorganic compounds.

This doesn’t make for a great recipe for a refreshing cocktail, but essentially healthy urine is non-toxic and sterile. (Urine is sterile only as long as it remains in the bladder – the kidneys take care of that – but as it passes down the urethra it is exposed to bacteria, and obviously once in the ‘open air’, so to speak, it begins to become less sterile. Therefore, if you are going to drink it, drink it fresh and straight from the ‘tap’.)

Sodium in Urine
However, ‘normal’ survival manuals do not recommend drinking urine because of the level of sodium in it (about 2% normally, but this increases as dehydration causes the urine to become more concentrated). The sodium is not a real problem if you are able to dilute the urine, or to drink fresh water before or after it – but then again, you would suppose that you wouldn’t be drinking your own urine if you had a better source of water, unless, like Captain Rum in Blackadder, you have a taste for it.
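The dilution point is simple mixing arithmetic. A sketch, using the rough 2% figure quoted above (the volumes are purely illustrative, and none of this is a medical recommendation):

```python
def diluted_sodium_pct(urine_ml, urine_sodium_pct, water_ml):
    """Sodium concentration (%) after mixing urine with fresh water."""
    sodium_ml = urine_ml * urine_sodium_pct / 100.0
    return 100.0 * sodium_ml / (urine_ml + water_ml)

# 250 ml of urine at roughly 2% sodium, topped up with 750 ml of melted snow,
# drops the concentration fourfold:
print(diluted_sodium_pct(250, 2.0, 750))  # 0.5
```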

An example of drinking urine in diluted form comes from an activity close to my heart. In 2008 the ultra-runner Tom Puzak, running a 135-mile winter race in Minnesota, became so dehydrated shortly before the end of the race that he put snow into his cup and urinated into it to melt the snow; he was thereby able to complete the race in an admirable second place, in a race where the majority of the 142 starters had dropped out.

Urine: A Finite Supply
Drinking urine, therefore, is not going to do you any immediate harm. However, if you are drinking it because of a lack of other sources of water in dry environmental conditions (it does not necessarily have to be hot – dry cold is still effective at stripping water from the human body, coupled with the fact that cold suppresses the human thirst response), and over a lengthy period of time, then drinking urine will compound the issue by returning ‘waste’ products to the kidneys. As water is also lost through perspiration and exhalation, these waste products will build up as the urine excreted becomes more and more concentrated, to the extent that they might cause renal failure. Whatever the reason for drinking urine, there is a limited supply, and it will ultimately dry up.

The sodium content is exactly why seawater is so dangerous for someone suffering from dehydration; seawater, blood and urine are the three substances listed in the US Army’s survival manual as inappropriate substitutes for potable water.

Urine and Crush Syndrome – “Probably not”
Sodium is not the only issue, however, as urine also contains other constituents, such as potassium and phosphates, and this has implications for people suffering from crush syndrome (otherwise known as Bywaters’ syndrome, after Eric Bywaters, who described the condition in victims crushed in collapsed buildings during the Blitz of London in 1941). It is seen primarily in car crash and earthquake victims and is one of the most challenging conditions to deal with in first aid or field medicine.

Crush syndrome occurs as the result of a crush injury to part of the body: the breakdown of the crushed muscle tissue releases myoglobin, potassium and phosphates into the bloodstream. When the crushed limb is released and blood begins to flow back into the crushed area – if this is done without expert intervention – ‘reperfusion syndrome’ follows, in which these three products flood the rest of the body via the bloodstream and ultimately cause renal failure. It is also called the “smiling death”, as victims at first seem to regain use of the crushed limb and may become cheerful before release, only to die shortly afterwards. Obviously, if you are adding the potassium and phosphates from urine into a body that is also dealing with crush syndrome, then you are potentially increasing the danger of renal failure.

As a counter-example, however, in 2008 Shen Peiyun spent 146 hours with his leg wedged underneath a piece of ceiling in the rubble of China’s giant 12 May earthquake in Sichuan province, and survived by conserving energy and pooling his urine with “what felt like leaves”. During the same earthquake, other survivors also reported drinking urine to survive, such as Wang Chunbang, a miner from a manganese pit in Qingchuan that was damaged by the earthquake, who said he drank it to stay hydrated. When Shen was interviewed about his miraculous escape, he answered: “Forget life miracles. I drank my own urine.”

Camel Urine – you’ll have to ask nicely!
One last example of drinking urine to survive comes from a story that caused a great stir when it was published under the pithy title “Authentic Narrative of the Loss of the American Brig Commerce by the Late Master and Supercargo James Riley”, later retitled “Sufferings in Africa” – a book that President Abraham Lincoln listed as one of the three most influential works, alongside the Bible and The Pilgrim’s Progress, that shaped his political ideology, particularly his views on slavery.

It describes the loss of the brig Commerce off the coast of Western Sahara in 1815. The eleven-man crew and their captain, James Riley, managed to reach shore, only to be attacked by Sahrawi natives who drove them back out to sea. They then travelled 300 miles further down the coast, to near Cape Barbas, where they landed again for lack of food and water and resolved either to die from starvation and dehydration, or to offer themselves up as slaves to the first tribe they encountered – which is exactly what happened.

They were found by nomads of the Oulad Bou Sbaa tribe, who enslaved the crew; their story, as told by James Riley, contains extreme dehydration, severe starvation and ever-present brutality while roaming the Sahara desert with their captors. It was during this episode that the crew and captain were forced to drink urine – not only their own, but also camel urine – to survive. (I won’t get into the discussion of the drinking of camel urine, as this is a much-argued area that any Google search will highlight, and it is well beyond the scope of this article.)

It was through a chance meeting with a pair of Arab slave traders that James Riley was able to negotiate a passage to the nearest town, where by luck the local consul, William Willshire, agreed to pay for their release. Riley and most of his crew returned home to America, where James Riley established the town of Willshire, Ohio, named in William Willshire’s honour.

All I can say to the above, therefore, is that in certain circumstances it may prove necessary to drink urine – although there are also ways of distilling the water out of it by means of a solar still. So my final words are: Cheers, Bottoms Up! (“Bottoms Up” being another allusion to Blackadder II, and no prizes whatsoever to anyone who knows where it comes from, but it sounds almost exactly like F@**.)

Barefoot is a revolution that will not easily be reached.


This is not just another article about how great barefoot running is. Instead, it is an attempt to debunk some of the claims made in the recent BBC Four programme “Dissected: The Incredible Human Hand and Foot”, presented by Dr George McGavin. It also comes at a time when the latest vogue for running in minimalist shoes seems to have passed, given the evidence of a global drop in sales. It would seem that runners have tried barefoot running, and most have come away less than imbued with a new spirit of liberation.

So what has happened and why did barefoot running not turn out to be the panacea that it was meant to be for running injuries?

The BBC Four programme was fascinating in many respects, for we don’t often get to see the actual anatomical make-up of a human foot away from the pages of a book or a colourful image on a computer screen. However, one area of the programme that did disappoint me was the section on barefoot running, as this seemed to lack sufficient research and therefore stalled upon a rather simple and misunderstood piece of evidence.

I’ve called this article “Barefoot: Forward the Revolution” not because I’m attempting to re-energise the barefoot running revolution, but because I’m using the term ‘revolution’ in its circular, scientific sense. The BBC Four programme showed the TV experiment of a woman moving across a section of floor fitted with pressure sensors: the first time walking barefoot, the second time running in running shoes and the third time running barefoot. The scientist then explained, through the colourful visualisation on the computer screen, how the loadings differed. Walking barefoot, the weight landed on the heel and was transferred longitudinally through the foot to the big toe, with a fairly similar pressure reading when the woman ran in shoes. The final experiment, with the woman running barefoot, showed no pressure being exerted through the heel; instead the woman was landing on her forefoot only. This was given as an example of why shoes might be deleterious to us as humans, the argument being that they have allowed us to develop an unnatural running gait and that, by simple extrapolation, our natural gait lands on the forefoot and does not use the heel.

Now, where do I start!

Barefoot Running and the Experts

Over the years that I’ve been talking about and researching the biomechanics of barefoot running and walking, most traditional experts in the field of foot biomechanics have completely derided the barefoot ethos and have pointed to the large amount of evidence suggesting that barefoot running increases the occurrence of acute or chronic running injuries. They therefore proudly expound their belief that barefoot running has not turned out to be the panacea it was claimed to be, and that remedial measures are more efficacious.

Absolutely true – I’m not going to even try to claim otherwise. However, what I argue is that if you take off a plaster cast that has been worn on a limb since birth, and then subject that limb to a type of work for which it has always had the help of the plaster cast, then, surprise surprise, you are going to get some issues. The analogy works very well for barefoot running: if you take the shoe off the foot and then run on it, the poor foot, and everything connected with it, is subjected to an experience it has never had before and cries “HELP!”.

Going back to the BBC programme, “Dissected: The Incredible Human Hand and Foot“ makes the point very well when it shows the woman running across the floor of the lab. You could tell that this woman had probably never run barefoot before, as a runner with any lengthy exposure to barefoot running would never run purely on their forefoot. You might do so for a week, perhaps a couple of weeks; however, I can guarantee that within that period two things will happen. First, the Achilles tendon will react sharply to the new loadings being applied through it and stiffen up quite painfully; second, the metatarsals in the forefoot will also be subjected to extreme loads and will probably develop stress fractures.

Those out there who have never run barefoot will now be wondering why we as humans would ever want to run barefoot, since it seems so dangerous to the human foot. My answer is that to understand barefoot running is to understand how we run, and what has happened since the advent of the shoe.

Barefoot Running: Back to the Revolution

With barefoot walking, very little is actually required of the Achilles tendon, and there is a case for not needing one at all, since our ape cousins either do not have one or have one that is surprisingly small. To run, however, requires one of the strongest tendons in the body in the form of the Achilles. Primarily this is because the Achilles is a major part of the mechanism that controls the flexion of the foot: it extends through the arch of the foot via the plantar fascia, controls the arch-like formation of the foot, and ultimately provides the spring in the human running gait – the ‘push off’ via the big toe.

Most of the load within the human running gait is therefore transmitted through these mechanisms, from the calf muscles and through the knee joints into the pelvis and lower back via the upper leg muscles. However, when we run in shoes, we put a wedge of cushioning material between us and the ground, and this causes two things. It potentially allows us to run with more of a heel strike, because it desensitises the foot; and it changes the direction of the loadings from the back of the leg (posterior) to the front of the leg (anterior). This is why that favourite injury of athletes and military personnel, the “shin splint”, has become so common as a diagnosis, even though it is a fairly meaningless term better referred to as “unexplained lower leg pain”. Whatever the name, it is a diagnosis for pain in the anterior of the leg caused by increased loadings through an area of the leg that is mostly bone and does not contain sufficiently adaptable musculature.

Turn the above on its head and think of it in reverse: when we start running barefoot, we switch the loadings from the anterior back to where they should be, the posterior of the leg. However, the human body is not a static system – it adapts to use, or the lack of it – and therefore the posterior mechanisms that should naturally be able to cope with the load of barefoot running, such as the calf muscles and Achilles tendon, have atrophied and are not able to cope with the new loading. This is true of any runner, even a very experienced and highly trained one; we all suffer from under-developed Achilles tendons and calf muscles caused by running in shoes, because we under-train these bio-mechanisms.

Now, going back to the woman running on “Dissected: The Incredible Human Hand and Foot” and her lack of use of the heel whilst running: a new barefoot runner will commonly move onto their forefoot because the increased sensitisation of their bare foot on the ground gives them a greater sense of security. However, this is not natural, and it only serves to further increase the amount of loading going through the under-developed posterior mechanisms in the leg, and severely increases the loading on bones in the foot that were never designed to take it.

Therefore the use of the heel in barefoot running is just as important as it is in walking. However, runners have been led to believe that the heel strike is the cause of all their woes and that barefoot running reduces this effect. This, I believe, is a misunderstanding, since I favour the evidence put forward by the Virginia Speed Clinic in the USA, whose research has claimed that it is not the initial impact of the heel on the ground that causes most runners' injuries, but rather the speed at which the load comes on to the heel. This happens when a shod runner seeking to run faster, rather than increasing their cadence (i.e. the number of strides per minute, as seen in runners who have traditionally always run barefoot), increases the length of their stride, thereby extending their foot further from the central baseline of their body – something you can only achieve with a cushioned heel.
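The cadence versus stride-length trade-off can be sketched with a little arithmetic: speed is simply steps per minute multiplied by metres per step. The figures in this small Python sketch are illustrative only – they are not taken from the Virginia Speed Clinic research – but they show how the same speed can come either from a high cadence with short strides or from a lower cadence with a longer, heel-reaching stride.

```python
# Illustrative only: running speed as the product of cadence and stride length.
# The numbers below are made-up examples, not data from this article.

def speed_m_per_s(cadence_spm: float, stride_m: float) -> float:
    """Speed = steps per minute x metres per step, converted to metres per second."""
    return cadence_spm * stride_m / 60.0

# Two different ways to run at the same speed:
high_cadence = speed_m_per_s(cadence_spm=180, stride_m=1.0)  # short, quick strides
long_stride = speed_m_per_s(cadence_spm=150, stride_m=1.2)   # longer, heel-first strides

print(f"high cadence, short stride: {high_cadence:.2f} m/s")
print(f"lower cadence, long stride: {long_stride:.2f} m/s")
```

Both work out at 3 metres per second; the difference lies in where and how fast the load arrives, not in the speed achieved.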

This is why I named the article "Barefoot: Forward the Revolution", because unfortunately runners have tried barefoot running using minimalist shoes and most have suffered accordingly, because it has proved to be hard. We are therefore going to have to learn how to walk and run barefoot again if we stand any chance of halting the increasingly deleterious condition of the human foot and lower leg. But it won't be easy, and it will be a revolution in a very real sense – because we have been here before, give or take 10,000 years.

 


Jack Nicholson in “The Shining” demonstrating that just sitting outside in the cold is more likely to bring on death through hypothermia rather than hibernation.

Hibernation is popularly understood to be when an animal curls up asleep sometime at the end of autumn and gets up again in early spring, the classic example being the black or brown bear.

However, hibernation is more complex than that, since it is the suppression or reduction of an animal's metabolism in response to an energy crisis, i.e. a lack of food and/or water. In reality it is only one type of response that falls under the more technical term 'torpor', a stress response by animals not exclusively to winter scenarios but also to summer heat or other environmental conditions; for example, the California Ground Squirrel curls up in a state of torpor when conditions get too hot.

Although hibernation or torpor is known as a stress response, scientists are unsure what internal physiological mechanism triggers it. The cell biologist Mark Roth has examined the relationship between oxygen and metabolism and has shown that simply exposing an animal to freezing conditions is more likely to produce a hypothermic rather than a hibernation response, as a warm-blooded animal's response to cold is to try to maintain core body temperature by using more energy reserves and oxygen. Instead you have to combine freezing conditions with something that reduces the demand for oxygen. Even less understood is the trigger mechanism that brings animals out of their torpor state.

Most of the research into hibernation and torpor has been carried out on the squirrel family, and the Arctic Ground Squirrel has been shown to let its core body temperature fall to -2.9 degrees centigrade during hibernation. The Arctic Ground Squirrel is also an 'obligate hibernator', in that it enters this state regardless of the ambient temperature or lack of food – our own European Hedgehog is a more common obligate hibernator that we in the UK are more familiar with. The reverse of an obligate hibernator is a 'facultative hibernator', which enters hibernation in direct response to environmental conditions such as drought or cold; the black-tailed prairie dog is usually given as the example of this type.

It gets even more complicated than this, however, as animals may not remain in their state of hibernation or torpor for the entire time; instead they may only have short bouts of torpor and then rouse from that state. In some cases it is unknown why this occurs. One theory is that there is a requirement for sleep even within torpor, and therefore the animal has to come out of torpor in order to sleep – strange as that may sound. Another theory is that the immune system needs to be woken up every so often to prevent infection, and this requires the animal to come out of torpor. Sometimes the reason is more straightforward, as in the chipmunk: unable to build up enough fat reserves for the entire winter, it stores food in its burrow and comes in and out of torpor, in bouts that may only last a few hours or days, to feed.

Strangely, although bears are usually given as a prime example of animals that hibernate, for a while it was claimed that black and brown bears did not enter a true state of hibernation, because their body temperature did not drop significantly – only 3–5 degrees centigrade – and female bears gave birth whilst supposedly in this state. However, the definition of hibernation has widened since it was discovered that bear metabolic suppression is not so closely linked to temperature reduction, i.e. an adult bear's metabolic rate can fall by as much as 25% without significant reduction of core body temperature. Bear hibernation is therefore now seen as physiologically different to the rodent hibernation witnessed in the various species of ground squirrel.

Given the wide difference in hibernation and torpor types and states described above, can humans exhibit hibernation states?

Given our ancestral background as an African arboreal primate that probably stood upright and became hairless in part as a heat response, you would have thought that we would not have any of the mechanisms necessary for hibernation. Indeed, until the discovery of the dwarf lemur, no primate was known to hibernate. (The Fat-tailed Dwarf Lemur hibernates in response to drought conditions during the winter period in its home range on the island of Madagascar, even though the daytime temperature can still rise above 30 degrees centigrade. Biologists call this aestivation rather than hibernation, since the response is to arid conditions and high temperatures.)

OK OK, you all say, you have told us a lot about ground squirrels, bears and a fat-tailed lemur, but what about humans?

The evidence for human hibernation is almost totally anecdotal and from very rare, exceptional circumstances, almost none of which have been examined under scientific conditions. Usually hibernation is offered as the explanation because it is the only way the person could have survived so long in conditions that would normally kill within days if not hours. Just look at the cases of Mitsutaka Uchikoshi in Japan in 2006 or Peter Skyllberg in Sweden in 2012, both of whom survived substantial amounts of time without food – and in Mitsutaka's case unconscious, and therefore without fluid either. Both are unproven and, in Peter Skyllberg's case, controversial.

There is more evidence for cold-induced torpor states from people who have fallen into very cold or freezing water and have survived totally submerged for considerable amounts of time before being rescued and successfully revived. A famous example of this is Anna Elisabeth Johansson Bågenholm, who survived a skiing accident in 1999 that left her trapped under ice for 80 minutes, during which her body temperature dropped to 13.7 degrees centigrade and she suffered circulatory arrest after 40 minutes. Her case was so unique that it prompted Swedish hospitals to begin using 'therapeutic hypothermia' to save patients who have suffered circulatory arrest, i.e. reducing body temperature, and therefore oxygen demand, in order to protect the brain from damage due to the lack of oxygen.

Other evidence for human hibernation is completely anecdotal. The British Medical Journal in 2000 re-published an article originally written in 1900 concerning Russian peasants who utilised a form of pseudo-hibernation, which they termed 'lotska' – winter sleep – to make it through the harsh and famine-stricken Russian winters. At the first sign of snow, the family would gather around the stove and fall asleep, waking once a day to eat a piece of black bread, made in the autumn, washed down with water. Then the family would fall back asleep. Obviously this is a different survival strategy rather than true hibernation, as the tactic is to lower the metabolic rate through the use of sleep. After six months the family would wake up and return to its tasks of planting and other farming activity for the new growing season, and to preparing for the next oncoming winter.

Unfortunately, although this was published in the peer-reviewed and respected BMJ, we are not sure if this story is true or simply an 'urban myth' – perhaps a 'rural myth', given we are talking about Russian peasants. However, even if it is true, it would still simply be a case of people sleeping rather than hibernating, as they would not be subjected to a lowering of core body temperature whilst gathered around the stove.

Perhaps more interesting are studies on 'induced human hibernation', more popularly known as 'suspended animation'. It has appeared in science-fiction literature and films as a way of preserving humans on interstellar journeys through space – think of the character Ripley in the film Aliens – but it has also been seen as potentially useful for terminally ill patients waiting for a treatment to be developed, in a process called cryonics. Unfortunately, at the moment cryonics causes irreversible damage – although the popularly held belief that the formation of crystals within the cells causes them to burst is false – and patients of cryonics are considered legally 'dead'.

Successful studies have been done on dogs and pigs, but at first glance these studies are not very relevant, as the process involves removing the total blood volume from the animal, replacing it with a low-temperature solution and then reversing the process. Other studies have used mice, which were exposed to hydrogen sulphide to reduce the oxygen demand whilst at the same time reducing the air temperature; unfortunately the same process failed when applied to larger animals such as pigs and sheep. (A little bit more 'unfortunate' for the pigs and sheep that were experimented upon.) Perhaps the relevance for humans of these studies on such 'poor and unfortunate' animals, however, is that dogs, pigs, mice and sheep do not show a natural tendency to hibernate.

In humans, apart from the anecdotal evidence above and the very rare occasions where people have survived falling into extremely cold water, the only research – private and as yet unpublished – has been the recent and ongoing Hibernaut project conducted by the Australian-based Dive Lab, which is using the ability of free divers – divers who hold their breath until resurfacing rather than relying on breathing apparatus – to reduce metabolic rate, brain temperature and heart rate at will. Free-diving shares the same concern of reducing the human body's metabolic, and therefore oxygen, requirement, and free-diving performance is therefore concentrated around the body's reflex oxygen-conserving adaptations: a drop in heart rate; body cooling through peripheral vasoconstriction; an increase in oxygen-carrying red blood cells (haemoglobin) through splenic contraction; and blood shift, where the blood flow is concentrated around the vital organs, which also helps with pressure compensation.

The Hibernaut project has concentrated upon the free-divers' ability to reduce brain temperature, since this is crucial for the development of a hibernation-like state, and in 2012 they demonstrated a "Successful, first-in-human (N=1) exploratory pilot study demonstrating a sustainable torpid metabolic state [in a human]", where the free-diver reduced brain temperature by 4 degrees centigrade and held it there for 15 minutes. In 2014 they are set for "Hibernaut-X", which as they put it is "….a `dive´ into suspended animation (?), hibernation (?), and other very `cool´ stuff."

Therefore the evidence for human hibernation is at the moment very much anecdotal, particularly outside of laboratory conditions or instances where the patient has been plunged into freezing water. A survival strategy based on reducing metabolic load through sleep rather than hibernation would seem to hold more credence, and does not bend the rules of human physiology; however, the line between lying down to sleep and lying down to die must be a very thin one in a survival situation, and I am not sure how that could ever be effectively controlled.

For survival courses and training please visit http://www.farafoot.co.uk

Blizzard Conditions in March 2013


Given that we have lived and worked outside nearly all our lives, it is not surprising that we have quite often felt wet, cold, tired, hungry and generally like crap, wanting to stop what we were doing and go home, or to the pub!

It is not much of a shocker, therefore, to learn that cold has a rather negative effect on physical and mental performance in humans – unsurprising given that we are two-legged primates designed for sub-Saharan forests and savannahs, i.e. we are of slender build, have low muscle mass, are hairless and sweat quite a lot, all factors that are advantageous to a hunter of equatorial climes but less so in cooler and wetter conditions.

As our modern human ancestors, Mr and Mrs Homo sapiens, only left Africa some 60,000 years ago – a blink of a biologist's eye – we have made only a few physiological adaptations to cope with these newer, less suitable climates. Instead, because of our greater brainpower, we have made far more behavioural adaptations: we have put on more clothes and footwear, and heated our houses, the further we have moved from the tropics.

Given our nakedness, feeble frames and propensity to sweat it is not surprising that we are still susceptible to cold, wet and windy conditions, should we stand out in it for too long unprotected. However what is surprising is what constitutes ‘challenging’ conditions for us humans and this is important for anyone who ventures out into the big wide natural world.

Ok, so research has shown that reduced mental performance in tasks such as reasoning, memory and learning begins to be significant in humans at 18 degrees centigrade and below, and that substantial cognitive impairment occurs at temperatures of 10 degrees centigrade and below – both temperatures we in the UK would consider fairly mild, if not a decent summer's day. It is not surprising that brain performance is an early casualty of cold, as our large brains require a lot of energy – 25% of the total energy requirement of the entire body.

However cold response is not just due to cold weather, but rather a combination of environmental, physiological and psychological factors:

Air temperature
How wet and windy the conditions are
The amount and type of clothing worn
The health, fitness and age of the individual
How sleep deprived the individual may be
Physical and mental tiredness
Diet
Hydration
The time of day or night
The task being performed by the individual
The general morale of the individual – which is usually linked to the factors above

It is known, for example, that circadian rhythms (an endogenous, entrainable oscillation), or more plainly your biological 24-hour clock, have a notable effect on the human response to cold, in that the same amount of physical activity carried out at night does not produce the same rise in body temperature as it does during the day.

We are warm-blooded animals that maintain core body temperature, through homeostasis and thermoregulatory systems, at somewhere between 36.5 and 37.5 degrees centigrade. A drop in core body temperature of only 1.5 to 2 degrees is therefore dangerous for us, since the body begins to struggle to maintain normal metabolism and body functions and develops a condition we call hypothermia.

Hypothermia is separated into four bands: mild, moderate, severe and profound. The human body begins to show symptoms of substantial hypothermia when the core body temperature has dropped below 35 degrees centigrade, although hypothermia can present symptoms earlier than this, albeit fairly vague ones such as mild shivering and general apathy.

With moderate hypothermia the signs become more obvious: more violent shivering, lack of co-ordination, fingers and toes becoming blue in colour and the face turning pale as blood vessels constrict to try to keep the warm blood in the core of the body, around the major organs.

As the core body temperature drops further, more severe symptoms become apparent. Heart rate, respiration and blood pressure drop; outwardly, speech and thinking become sluggish, exposed skin becomes blue and puffy, and muscle co-ordination becomes so poor that walking is almost impossible. As profound hypothermia sets in, the individual might, in the last stages before death, exhibit paradoxical undressing, where they discard their clothes, and/or terminal burrowing (hide-and-die syndrome), where they enter small enclosed spaces.
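The four bands can be sketched as a simple classifier. Note that this article only states that substantial symptoms begin below 35 degrees centigrade; the other cut-offs in this Python sketch (32, 28 and 20 degrees) are assumptions based on commonly cited clinical staging, so treat them as illustrative placeholders rather than medical guidance.

```python
# A minimal sketch of the four hypothermia bands described above.
# Only the 35 C threshold comes from the article; the 32, 28 and 20 C
# cut-offs are assumed from commonly cited clinical staging.

NORMAL_RANGE_C = (36.5, 37.5)  # typical core body temperature, degrees centigrade

def hypothermia_band(core_temp_c: float) -> str:
    """Classify a core body temperature into the bands named in the text."""
    if core_temp_c >= 35.0:
        return "normal or early, vague symptoms"
    elif core_temp_c >= 32.0:
        return "mild"
    elif core_temp_c >= 28.0:
        return "moderate"
    elif core_temp_c >= 20.0:
        return "severe"
    else:
        return "profound"

print(hypothermia_band(36.8))  # normal or early, vague symptoms
print(hypothermia_band(30.0))  # moderate
print(hypothermia_band(13.0))  # profound
```

The last example, 13 degrees centigrade, corresponds to the lowest core temperature from which anyone has been revived, mentioned later in this article.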

The lowest core body temperature from which someone has been revived is 13 degrees centigrade, in the case of a seven-year-old Swedish girl who fell into the sea off the western coast of Sweden in 2011 – an exceptional event that is probably explained by the girl already being cold before she entered the water. Hypothermia through immersion in water is far more profound, since the rate of heat loss is vastly increased – a water temperature of 10 degrees can kill within one hour – although death through reactions brought on by cold shock from the initial immersion in freezing water tends to be more common than hypothermia itself.

Hypothermia is, however, a relative term. An exercising body fairly rapidly raises its core temperature, so a drop of 2 degrees might leave the core only a degree below the body's normal resting temperature; yet the body is still at risk of developing symptoms of hypothermia. Hypothermia might therefore be present in people with fairly normal core body temperatures, if they have been engaged in physical activity.

Something else to bear in mind with hypothermia, and more precisely with shivering, is that whilst the body will burn lipid stores, i.e. fats, to fuel low-level shivering, it fuels the high-intensity shivering that someone will develop during hypothermia from its glycogen stores. In situations where the glycogen stores within the muscles and liver are already low or non-existent due to physical activity or diet, the body will struggle to alleviate a drop in core body temperature by shivering, and therefore the hypothermia will become more severe as the body becomes colder.

There is also a difference between children and adults that makes children more at risk of developing hypothermia: children have a lower body mass compared to surface area, less body fat – particularly girls – and a higher metabolic cost of physical activity, all biological factors that add strain to the thermoregulatory systems of children's bodies. In a cold environment, therefore, children run the risk of lower skin temperatures as a result of greater vasoconstriction; conversely, their metabolic heat production increases in the cold to a greater extent than that of adults, which appears to be sufficient to maintain their body temperature during exercise, but not during prolonged rest.

Finally, what do you do with someone with hypothermia? I have had symptoms of hypothermia myself, and my experience is that it begins to run away with itself: because it increases the lack of muscle co-ordination, apathy and mental confusion, you become progressively less able to solve the situation yourself. In my case I caught the symptoms early on and managed to alleviate them quickly myself, but at the time I knew I was close to not being able to do so.

Therefore what you strive towards is not getting into a situation where you risk hypothermia in the first place. If someone does develop it, however, the obvious solution is to get them somewhere warm and dry, get them out of any wet clothes and get warm fluids inside them, such as a nice cup of tea. If a place indoors is not available, then a tent and a sleeping bag is the second option, perhaps with someone else inside the sleeping bag with the patient to add their body warmth, and with the same requirement to get the patient out of their wet clothes, since water is such a good conductor of heat away from the body.

Severe hypothermia is going to require specialist care, since re-warming can open up blood vessels and thereby cause blood to flow into very cold areas of the body, resulting in cold blood returning to the heart and dangerously reducing core body temperature further. Other symptoms of hypothermia might include frostbite, which should not be warmed in camp and also requires specialist care.

Lastly, someone who has suffered from hypothermia is probably not going much further even when they have been warmed up, and therefore needs to be evacuated. If that is not possible, then at the very least they should spend 24–48 hours being treated for the symptoms before even attempting to carry on.