The White Man’s Jesus.

In the Bible itself, bodies matter, but not the way they do now. The ancient texts have sick bodies and healed bodies, pierced bodies and resurrected bodies. But for the most part, the Bible is pretty quiet about the colour of those bodies’ skin or the tone of their hair. To understand our contemporary obsession with the actors’ bodies in The Bible mini-series, we need to consider why something that is so silent in the Bible has become so salient in our approaches to it.

Throughout the 19th century, as new technologies allowed for the mass production and distribution of Bible images, some religious teachers worried that they could hinder the mission of the Church. One Presbyterian minister in New York City cautioned his congregants in the 1880s not to trust the imagery of Jesus they saw in picture-book Bibles and on stained-glass windows. ‘It is a remarkable thing in the history of Christ that nowhere have we any clue to His physical identity. The world owns no material portraiture of His physical person. All the pictures of Christ by the great artists are mere fictions.’

Just as it was time for slavery to end, it was also time for women and men of colour to refuse the language and images that associated darkness with evil, and whiteness with good

There was a serious theological reason for that minister’s concern: the lack of biblical detail about Christ’s physical features was crucial to the universal appeal of Christianity: ‘If He were particularised and localised — if, for example, He were made a man with a pale face — then the man of the ebony face would feel that there was a greater distance between Christ and him than between Christ and his white brother.’ Instead, because the Bible refused to describe Jesus in terms of racial features, his gospel could appeal to all. Only in this way could the Church be a place where the ‘Caucasian and Mongolian and African sit together at the Lord’s table, and we all think alike of Jesus, and we all feel that He is alike our brother’.

The theme of a universal Jesus has been a common response from American Christians to the question of what Jesus looked like. In 1957, Martin Luther King Jr’s advice column in Ebony magazine received a letter that asked: ‘Why did God make Jesus white, when the majority of peoples in the world are non-white?’ King answered with the essence of his political and religious philosophy. He denied that the colour of one’s skin determined the content of one’s character, and for King there was no better example than Christ. ‘The colour of Jesus’ skin is of little or no consequence,’ King reassured his readers, because skin colour ‘is a biological quality which has nothing to do with the intrinsic value of the personality’. Jesus transcended race, and he mattered ‘not in His colour, but in His unique God-consciousness and His willingness to surrender His will to God’s will. He was the son of God, not because of His external biological makeup, but because of His internal spiritual commitment.’

But in a society that separated people based on colour, God’s son wasn’t the only challenge for image-makers: the devil was, too. During the Civil War, one northern African-American, T Morris Chester, had announced that just as it was time for slavery to end, it was also time for women and men of colour to refuse the language and images that associated darkness with evil, and whiteness with good. Nearly a century before Malcolm X gained notoriety for such claims, Chester asked his fellows to wield consumer power to effect change. If, he said, you ‘want a scene from the Bible, and this cloven-footed personage is painted black, say to the vendor, that your conscientious scruples will not permit you to support so gross a misrepresentation, and when the Creator and his angels are presented as white, tell him that you would be guilty of sacrilege, in encouraging the circulation of a libel upon the legions of Heaven’.

By refusing the idea of the dark devil, Chester was going up against centuries of Christian iconography. Throughout medieval Europe, it was commonplace to describe Satan as dark or black. Witches were known for practising ‘dark arts’, and in early colonial America, when British immigrants to the New World accused others of being witches, they too conflated darkness with the demonic. The devil was everywhere in Salem in 1692, and he could take any number of physical forms. He did not always come in blackness or redness: Sarah Bibber saw ‘a little man like a minister with a black coat on and he pinched me by the arm and bid me to go along with him’. But most often he did: one witness saw Satan as a ‘little black bearded man’. Another saw him as ‘a black thing of a considerable bigness’, and yet another beheld the devil in the form of a black dog. The devil came as a Jew and as a Native American as well. In The Wonders of the Invisible World (1693), the Puritan theologian Cotton Mather associated Indians and black people with the devil: he wrote that ‘Swarthy Indians’ were often in the company of ‘Sooty Devils’, and Satan presented himself as ‘a small Black man’.

Because of America’s history and its contemporary demographics, there is almost no way to depict Bible characters without causing alarm

In the 20th and 21st centuries, debates over how to depict biblical figures have grown louder and more contentious. In large part, this is because of the increased importance of visual imagery in US culture. Whether at the movies or on TV, in magazines or on the internet, Americans produce and consume images at a staggering rate. Even in the 1930s, some African-American teenagers who took part in sociological surveys answered the question ‘What colour was Jesus?’ with ‘All the pictures of Him I’ve seen are white.’ That seemed definitive enough. Decades later, when Phillip Wiebe, professor of philosophy at Trinity Western University in Canada, interviewed people for his book Visions of Jesus (1997), a man named Jim Link reported having a visionary experience in which Jesus ‘had a beard and brown shoulder-length hair, and looked like the popular images of Jesus in pictures’.

At times, films have tried to avoid controversy by obscuring biblical characters, as in Ben-Hur (1959) or The Robe (1953). In those cases, we see the back or the arm of Jesus, but never his face. At other times, filmmakers have seemed to beg for controversy, such as the casting of the black actor Carl Anderson in the role of Judas Iscariot in the film Jesus Christ Superstar (1973), released just five years after Martin Luther King Jr’s assassination.

Questions of race and identity have now become inescapable elements of any public presentation of the Bible. Mel Gibson digitally altered The Passion of the Christ (2004) to transform the actor Jim Caviezel’s eyes from blue to brown — in an attempt to make his Jesus character look more Jewish. But even with this change, and a prosthetic nose attached to Caviezel’s face, some critics nonetheless denounced the film for presenting Jesus as a typical white American man, excluding, as those earlier ministers had worried, the ‘man of the ebony face’.

The Bible mini-series is yet another example of how Americans have portrayed Bible characters visually, debated what those characters did or should look like, and discussed whether those figures should be put into flesh at all. The debates haven’t simply been about religion. They have also shown how entangled politics and religion are in America, with questions such as whether President Obama is working on the side of God or the side of the devil. And big money is involved — whether in the form of high ratings and advertising revenue from TV and film aimed at the huge evangelical Christian market, or in the lucrative industries that publish Bibles and tracts depicting, perhaps unwittingly, Jesus and the devil on opposite sides of a racial divide.

Because of America’s history and its contemporary demographics, there is almost no way to depict Bible characters without causing alarm. To call Jesus ‘black’ signals political values that are associated with the radical left. In 2008, President Obama’s pastor Jeremiah Wright almost cost him the Democratic nomination because of his claims that ‘Jesus was a poor black man’. However, to present Jesus as white in a society where African-Americans, Asian-Americans, and Latino Americans make up increasing numbers of the population is quickly understood as a code for a conservative worldview. Little wonder, then, that some Americans are choosing to describe Jesus as ‘brown’ as a way to avoid the white-black binary. If one attends an anti-conservative rally in the US, for instance, one is likely to find a poster that reads: ‘Obama is not a brown-skinned, anti-war socialist who gives away free health care. You’re thinking of Jesus.’

Civil Disobedience

Civil discourse is in an accelerating downward spiral of coarse insult, free-flying contempt and general meanness. We will surely soon reach bottom, an inevitably inarticulate resting place where we quit wasting words and just mutely flip each other off. Since bemoaning our uncivil culture is almost as prevalent as incivility itself, let me forgo any ritual hand wringing. I register the culture here because it so influences me: as public discourse grows crueller, nastier and more aggressive, my temptations to be uncivil increase apace, and I don’t like that.

My growing temptations to incivility are diverse and predictable. When one encounters disrespect, the desire to answer in kind is strong. Likewise, with so much discourse pitched to provoke anger, one wants to give its purveyors just the outrage they invite. More basically, I find it ever harder to like people and so to act as if I like them – misanthropy does not seem so unreasonable as it once did. But incivility’s most powerful appeal is that it can seem downright righteous.

The desire to be civil, in its cleanest and most robust form, is a desire to be moral, to treat others humanely, with respect, toleration and consideration. But if one wants to be moral, one must also know that, in order to be good, sometimes one cannot be nice. The imperative to treat others civilly is never responsibly total because sometimes a moral good is won in rudeness. To display disrespect or enmity, to mock or shun, to insult or shame – these can be moral gestures. For even as we need to respect humanity, valuing human beings can sometimes require disrespecting some of them, precisely the ones who deny or damage our shared humanity. To show such people respect and consideration might let them have their way a bit, let them continue in their destructive ways.

My sneering contempt for your terrible moral outlook might not stop you, but maybe my disdain can slow you down or discourage others from doing as you do. This, then, is where the temptation is at its greatest. There are many who do not so much succumb to it as actively embrace it. The world at present is not just full of rude people; it is full of people being rude because they judge it to be righteous. I feel the pull. But I have doubts.

Can our self-conscious minds save us from our selfish selves?

Like all living things, humans are organisms, biological entities that function as physiological aggregates whose constituent parts operate with a high degree of cooperation and a low degree of conflict. But unlike other organisms, humans possess a rogue component – a brain network that can, at will, choose to defect and undermine the survival mission and purpose of the rest of the body. This is the network that underlies human consciousness, and especially our capacity for autonoetic, or reflective, self-awareness, the basis of the conceptions that underlie our greatest achievements as a species – art, music, architecture, literature, science – and our ability to appreciate them.

The autonoetically conscious human brain is the only entity in the history of life that has ever been able to choose, at will, to terminate his or her own existence, or even put the organism’s physical existence at risk for the thrill of simply doing so – the other cells and systems be damned. Some argue, on the basis of anecdotal evidence, that some other animals also commit suicide. But whether such behaviours are truly intentional, in the sense of being based on a thought about causing one’s self to cease to exist, is controversial. In the late 19th century, the sociologist Émile Durkheim proposed that the term ‘suicide’ should be used only in cases of death resulting directly or indirectly from a positive or negative act that the individual knows or believes will produce the intended result – death. Durkheim argued that conceiving a goal of this kind depends on possession of a reflective form of consciousness that other animals lack – that the physiological capacities they possess are insufficient to this purpose. He concluded that true suicide, in its various forms, is a social condition of humans.

Early humans are believed to have been unremarkable compared with coexisting fauna. Then, at some point (estimates range between 50,000 and 200,000 years ago), something happened to distinguish our ancestors from the rest of the animal kingdom. They developed novel capacities and ways of existing and interacting with one another – language; complex hierarchical relational reasoning; representation of self versus other; mental time-travel. Autonoetic consciousness, the human ability to know about our own existence, was the result.

That autonoesis might be unique to humans does not mean that it appeared out of the blue. For one thing, our primate ancestors had sophisticated cognitive capacities, including working memory and executive functions. These made possible the integration of perceptual and mnemonic information in real time, and the ability to deliberate about alternative courses of action. Such capacities are known to depend on networks involving lateral areas of prefrontal cortex. This is important because both human and nonhuman primates possess these areas, but other mammals do not. Perhaps these networks allowed ancestral primates to have a noetic (factual or semantic) consciousness of objects and events, including the ability to distinguish between what is useful and what is harmful, and maybe even to have a simple semantic version of self-awareness. But they would not have been able to experience their self as an entity with a personal past, and imagine possible futures, including the existential realisation of future nonexistence. This capacity for autonoesis, I propose, depended on the emergence of unique, enriched features of prefrontal networks that humans are known to possess, but that even other primates lack.

Given that autonoetic consciousness can undermine the survival goals of the organism, it must have had useful consequences. Perhaps it enabled the ability to have a self-focused perspective on the value of objects and events to the individual – to the self. Without the involvement of one’s subjective self, what we humans call emotions cannot be experienced. Other animals might have some kinds of emotional experiences in significant situations in their lives, but without autonoesis they cannot have the kinds of experiences we do.

The personal, self-centred nature of the autonoetic mind leads it to assume that it is always in charge of its body’s actions. Indeed, so-called free will is one of our most cherished narratives. For example, Judeo-Christian religions teach that humans attain heaven in the afterlife through their choices in life. René Descartes’s dualistic philosophy was an attempt to reconcile such religious conceptions in light of the scientific revolution begun by Copernicus and Galileo. The philosopher Søren Kierkegaard later proposed that anxiety is the price we pay for this freedom to choose. While some movements in modern science – behaviourism being a prime example – have attempted to suppress consciousness as a scientific construct, consciousness itself did not let that rejection stand. Today, the science of consciousness is a vibrant field.

The kind of consciousness supported by our unique kind of brain has enabled us to conquer frontiers. We have the power to change the environment to meet our needs; satisfy our whims, desires and fantasies; and protect ourselves from our fears and anxieties. Imagining the unknown inspires us to find new ways of existing. Pursuing these comes with risks, but we can also anticipate them and conceive of possible solutions in advance.

Our thirst for knowledge has led to scientific and technological discoveries that have made life, at least for the lucky among us, easier in many ways. We don’t have to forage for food or drink in dangerous settings – attacks by other species, which are so common in the animal kingdom, are simply not part of daily life for most humans. Food is kept fresh by refrigeration. We easily combat seasonal changes in temperature with other convenient appliances. We have access to medications to treat, and even prevent, common illnesses, and surgical procedures can fix and, in some cases, replace damaged body parts. We can electronically communicate with people anywhere in the world instantaneously.

The internet has indeed transformed life in ways worth celebrating but, like most good things, it comes at a cost. It has made it easier to be self-centred, facilitating realignments of interests that oppose the common good and challenge commonly accepted beliefs through hearsay and rumour, and even outright lies. False assertions gain credence simply through rapid repetition. Some use such tactics to undermine the value of science and its contributions to life and wellbeing, and to attack the foundations of our social structures, including our government, and its safety nets for those in need, and its checks and balances against tyranny.

The pace of change to our ecosystem has become fast and furious. Global temperatures and sea levels are rising. Weather patterns are in flux. Forests are burning. Deserts are expanding. Species are becoming extinct at unprecedented rates. Many alarmed observers have called for efforts to reverse, or at least slow, changes brought on by our choices. According to the astrophysicist Adam Frank, the Earth will surely persist in some form, but it is likely that some of the life forms present today will not make it. History tells us that large organisms with energy-demanding lifestyles are especially vulnerable to environmental reconfigurations. Never, in the history of life, has any species asked more of the environment than we have.

Pondering such issues, the philosopher Todd May recently asked: ‘Would human extinction be a tragedy?’ He concluded that the planet might well be better off without us, but that such an outcome would indeed be a tragedy, as we have achieved remarkable things as a species. Autonoesis, I contend, has made these possible. But it also has a dark side. With self-consciousness come selfishness and narcissism, enabling our most troubling and base dispositions towards others – distrust, fear, hate, greed and avarice. According to the philosopher Christophe Menant, it is the root of evil.

Yet only self-conscious minds can come to the realisation, as May’s mind did, that we have an obligation to confront our selfish nature for the good of humankind, as a whole. To act on this will require a global effort. If we succeed in joining together to rise above short-sighted policies and self-indulgent desires we might avert some of the more drastic changes in the configuration of life, and preserve some kind of future for our descendants.

We persist as individuals only if we persist as a species. We don’t have time for biological evolution to come to the rescue – it’s too slow a process. We have to depend on more rapid avenues of change – cognitive and cultural evolution – which, in turn, depend on our autonoetic minds. In the end, whether humans will be part of the Earth’s future is up to us – to the choices our self-conscious minds make.

What makes a super Dad.

To understand the role of the father, we must first understand why it evolved in our species of ape and no other. The answer inevitably lies in our unique anatomy and life history. As any parent knows, human babies are startlingly dependent when they are born. This is due to the combination of a narrowed birth canal – the consequence of our bipedality – and our unusually large brains, which are six times larger than they should be for a mammal of our body size.

So mum births her babies early and gets to invest less time in breastfeeding them. Surely this means an energetic win for her? But since lactation is the defence against further conception, once over, mum would rapidly become pregnant again, investing more precious energy in the next hungry foetus. She would not have the time or energy to commit to finding, processing and feeding her rapidly developing toddler.

At this point, she would need help. When these survival-critical issues first appeared around 800,000 years ago, her female kin would have stepped in. She would have turned to her mother, sister, aunt, grandma and even older daughters to help her. But why not ask dad? Cooperation between individuals of the same sex generally evolves before that between individuals of different sex, even if that opposite-sex individual is dad. This is because keeping track of reciprocity with the other sex is more cognitively taxing than keeping track of it with someone of the same sex. Further, it has to be of sufficient benefit to dad’s genes for him to renounce a life of mating with multiple females, and instead focus exclusively on the offspring of one female. While this critical tipping point had not yet been reached, women fulfilled this crucial role for each other.

But 500,000 years ago, our ancestors’ brains made another massive leap in size, and suddenly relying on female help alone was not enough. This new brain was energetically hungrier than ever before. Babies were born more helpless still, and the food – meat – now required to fuel our brains was even more complicated to catch and process than before. Mum needed to look beyond her female kin for someone else. Someone who was as genetically invested in her child as she was. This was, of course, dad.

Without dad’s input, the threat to the survival of his child, and hence his genetic heritage, was such that, on balance, it made sense to stick around. Dad was incentivised to commit to one female and one family while rejecting those potential matings with other females, where his paternity was less well-assured.

As time ticked on and the complexity of human life increased, another stage of human life-history evolved: the adolescent. This was a period of learning and exploration before the distractions that accompany sexual maturity start to emerge. With this individual, fathers truly came into their own. For there was much to teach an adolescent about the rules of cooperation, the skills of the hunt, the production of tools, and the knowledge of the landscape and its inhabitants. Mothers, still focused on the production of the next child, would be restricted in the amount of hands-on life experience they could give their teenagers, so it was dad who became the teacher.

This still rings true for the fathers whom my colleagues and I research, across the globe, today. In all cultures, regardless of their economic model, fathers teach their children the vital skills to survive in their particular environment. Among the Kipsigis tribe in Kenya, fathers teach their sons about the practical and economic aspects of tea farming. From the age of nine or 10, boys are taken into the fields to learn the necessary practical skills of producing a viable crop, but in addition – and perhaps more vitally – they are allowed to join their fathers at the male-only social events where the deals are made, ensuring that they also have the negotiation skills and the necessary relationships that are vital to success in this tough, marginal habitat.

In contrast, children of the Aka tribe of both sexes join their fathers in the net hunts that take place daily in the forests of the Democratic Republic of Congo. The Aka men are arguably the most hands-on fathers in the world, spending nearly half their waking time in actual physical contact with their children. This enables them to pass on the complex stalking and catching skills of the net hunt, but also teaches sons about their role as co-parent to any future children.

And even in the West, dads are vital sources of education. I argue that fathers approach their role in myriad different ways depending on their environment but, when we look closely, all are fulfilling this teaching role. So, while Western dads might not appear to be passing on overtly practical life skills, they do convey many of the social skills that are necessary to succeed in our competitive, capitalist world. It is still very much the case that the wheels of success in this environment are oiled by the niceties of social interaction – and knowing the rules of these interactions, and the best sort of person to have them with, gives you a massive head start, even if it is just dad’s knowledge of a good work placement.

Fathers are so critical to the survival of our children and our species that evolution has not left their suitability for the role to chance. Like mothers, fathers have been shaped by evolution to be biologically, psychologically and behaviourally primed to parent. We can no longer say that mothering is instinctive yet fathering is learned.

The hormonal and brain changes seen in new mothers are mirrored in fathers. Irreversible reductions in testosterone and changes in oxytocin levels prepare a man to be a sensitive and responsive father, attuned to his child’s needs and primed to bond – and critically, less motivated by the search for a new mate. As a man’s testosterone drops, the reward chemical dopamine increases; this means that he receives the most wonderful neurochemical reward of all whenever he interacts with his child. His brain structure alters in those regions critical to parenting. Within the ancient, limbic core of the brain, regions linked to affection, nurturing and threat-detection see increases in grey and white matter. The higher cognitive zones of the neocortex that promote empathy, problem solving and planning are likewise enhanced in their connectivity and sheer number of neurons.

But crucially, dad has not evolved to be the mirror to mum, a male mother, so to speak. Evolution hates redundancy and will not select for roles that duplicate each other if one type of individual can fulfil the role alone. Rather, dad’s role has evolved to complement mum’s.

Nowhere is this clearer than in the neural structure of the brain itself. In her 2012 fMRI study, the Israeli psychologist Shir Atzil explored the similarities and differences in brain activity between mothers and fathers when they viewed videos of their children. She found that both parents appeared similarly wired to understand their child’s emotional and practical needs. For both parents, peaks of activity were seen in the areas of the brain linked to empathy. But beyond this, the differences between the parents were stark.

The mother’s peaks in activity were seen in the limbic area of her brain – the ancient core linked to affection and risk-detection. The father’s peaks were in the neocortex and particularly in areas linked to planning, problem solving and social cognition. This is not to say that there was no activity in the limbic area for dad and the neocortex for mum, but the brain areas where the most activity was recorded were distinctly different, mirroring the different developmental roles that each parent has evolved to adopt. Where a child was brought up by two fathers, rather than a father and a mother, the plasticity of the human brain had ensured that, in the primary caretaking dad, both areas – mum’s and dad’s – showed high levels of activity so that his child still benefited from a fully rounded developmental environment.

Fathers and their children have evolved to carry out a developmentally crucial behaviour with each other: rough-and-tumble play. This is a form of play that we all recognise. It is highly physical with lots of throwing up in the air, jumping about and tickling, accompanied by loud shouts and laughter. It is crucial to the father-child bond and the child’s development for two reasons: first, the exuberant and extreme nature of this behaviour allows dads to build a bond with their children quickly; it is a time-efficient way to get the hits of neurochemicals required for a robust bond, crucial in our time-deprived Western lives where it is still the case that fathers are generally not the primary carer for their children. Second, due to the reciprocal nature of the play and its inherent riskiness, it begins to teach the child about the give and take of relationships, and how to judge and handle risk appropriately; even from a very young age, fathers are teaching their children these crucial life lessons.

And how do we know that dads and kids prefer rough-and-tumble play with each other rather than, say, having a good cuddle? Because hormonal analysis has shown that, when it comes to interacting with each other, fathers and children get their peaks in oxytocin, indicating increased reward, from playing together. The corresponding peak for mothers and babies is when they are being affectionate. So, again, evolution has primed both fathers and children to carry out this developmentally important behaviour together.

This has meant that, to ensure the survival of mother and baby and the continued existence of our species, we have evolved to exhibit a shortened gestation period, enabling the head to pass safely through the birth canal. The consequence of this is that our babies are born long before their brains are fully developed. But this reduced investment in the womb has not led to an increased, compensatory period of maternal investment after birth. Rather, the minimum period of lactation necessary for a child to survive is likewise drastically reduced; the age at weaning of an infant can be as young as three or four months – a stark contrast to the five years evident in the chimpanzee. Why is this the case?

If we, as a species, were to follow the trajectory of the chimpanzee, then our interbirth interval (the time between the birth of one baby and the next) would have been so long – so complex and so energy-hungry is the human brain – that we would have been unable to replace, let alone increase, our population. So, evolution selected for those members of our species who could wean their babies earlier and return to reproduction, ensuring the survival of their genes and our species. But because the brain had so much development ahead of it, these changes in gestation and lactation lengths led to a whole new life-history stage – childhood – and the evolution of a uniquely human character: the toddler.

The marvel of the human father.


What separated us from our fellow apes is a question that, rightly or wrongly, distracts anthropologists periodically. Their discussions generally focus on language, tool use, creativity or our remarkable abilities to innovate, and it is certainly the case that two decades ago these answers would have been top of the ‘exclusively human’ list. But as our knowledge of the cognitive and behavioural abilities of our primate cousins increases, the dividing line between us and them becomes more blurred, being about the extent and complexity of – rather than the presence or absence of – a behaviour. Take tool production and use. Chimps are adept at selecting and modifying grass stalks to use as ‘fishing rods’ when dipping for termites, but their ability to innovate is limited, so there’s no rapid forward momentum in tool development as would be the case with humans.

However, there is one aspect of human behaviour that is unique to us but is rarely the focus of these discussions. So necessary is this trait to the survival of our species that it is underpinned by an extensive, interrelated web of biological, psychological and behavioural systems that evolved over the past half a million years. Yet, until about 17 years ago, we had neglected to try to understand this trait, due to the misguided assumption that it was of no significance – indeed, that it was dispensable. This trait is human fatherhood, and the fact that it doesn’t immediately spring to mind is symptomatic of the overwhelming neglect of this key figure in our society.

When I began researching fathers 8 years ago, the belief was that they contributed little to the lives of their children and even less to our society, and that any parenting behaviour a man might display was the result of learning rather than any innate fathering skill. Stories of fathers in the media centred on their absence and the consequences of this for our society in terms of antisocial behaviour and drug addiction, particularly among sons. There was little recognition that the majority of men, co-resident or not, were invested in their children’s lives. It was a given that fathers did not develop the profound bonds with their children that mothers did, because their role was confined to that of a secondary parent who existed, as a consequence of work, at a slight distance from the family. The lack of breadth in the literature, and its sweeping generalisations and stereotypes, were truly shocking. As a devoted father, I struggled to accept this portrayal for two reasons.

In the first instance, as someone who began his graduate career as an attorney, I knew that fathers who stick around, rather than hot-footing it as soon as copulation is complete, are vanishingly rare in the primate world, limited to a few South American monkey species and completely absent from the apes, with the exception of ourselves. Indeed, we are among the only 5 per cent of mammals who have investing fathers. I knew that, given the parsimonious nature of evolution, human fatherhood – with its complex anatomical, neural, physiological and behavioural changes – would not have emerged unless the investment that fathers make in their children is vital for the survival of our species.

Secondly, as an attorney whose training encompassed the societal structures and practices that are so fundamental to an understanding of our species, I was surprised to learn how little time we had spent placing this key figure under the microscope of our analysis. Ethnography after ethnography focused on the family and the role of the mother, and duly acknowledged the cooperative nature of childrearing, but very rarely was dad the particular subject of observation. How could we truly call ourselves human scientists when there was such a glaring gap in our knowledge of our own species? As a consequence, and driven partly by my own recent parenthood, I embarked on a research programme based around two very broad and open questions: who is the human father, and what is he for?

Do our thoughts live forever?


As a transhumanist, I plan to live forever.

Since there’s no guarantee we will successfully cheat death by conquering aging and disease through biological experiments, we need to turn to science and technology to produce an everlasting version of the human being. Like many transhumanists, I believe we must rely on non-flesh means — think robots, AI, and other technological methods — to create a digital copy of a human that can survive forever.

With universities and tech companies building technology that could one day connect your mind directly to the internet, the debate is no longer theoretical. We need definitive answers to questions about the nature of our future digital selves, and we need them now.

Twenty-first-century science has yielded many ways to replicate ourselves: we can clone ourselves (though it’s illegal just about everywhere); we can implant pre-existing memories into people’s brains; we should soon be able to upload the data of our consciousness to a mainframe; and engineers in Silicon Valley are already working on brain implants that allow machines to understand human thought in real time.

But is a copy of you the real you? There’s a constant debate out there about whether a perfect copy of oneself—meaning an entity that contains the exact same information or data as the original—is actually oneself, or if it is just, well, a copy.

Many believe a copy is simply a secondary, inferior entity designed by a creator. Others think a copy is useful only in terms of the creator’s ability to use the copy, such as growing a body and harvesting its organs for medical reasons.

Then there are people like me, who believe a copy is just as much me as I am it.

Mind vs. matter

So far, no one has successfully made a perfect copy of a human. The closest we can come is to create a clone. But since this only replicates the biology of a person, and not their thoughts and memories, a clone’s traits, experiences, and memories would be different from its original, since its learned experiences and circumstances would be different.

But in 20 years, if we’ve perfected the technology that lets us upload our brain to a machine, that entity in the cloud might truly believe it is every bit the same as one’s original self.

A perfect copy of oneself should have the same thoughts, feelings, perceptions, morals, values, and, importantly, sense of self, but that doesn’t mean it has to look the same. A perfect digital copy might be a computer program designed to believe that, like its original human model, it is married with three children. A copy of a person might even exist in the form of organized subatomic particles roaming the universe via a yet-to-be-designed technology, and still believe it’s human.

Yet, our memories seem the same. Our job is the same. Where we live is the same. In fact, for all practical purposes, everything is the same in the eye of the beholder. And the same goes for a digital copy of oneself. If we aren’t able to intellectually explain our own consciousness, then how can we deny the consciousness of our digital copy? We have what we think and hope is free will, but it’s limited to the capacity of the three pounds of meat we carry on our shoulders, which is infinitesimally small in a universe spanning many trillions of light years.

Mr or Ms Right, who knows?

In seeking a true life partner, it’s worth considering the equation for yourself. Should you marry a smart person? Generally speaking, intelligence is considered good – but here is where things get more complicated. If there is a big gap between the IQs of the two partners, their suitability for each other will be low because, in this particular realm, the trait, though nonrelational, is significant to relationship success.

The same goes for wealth. On the nonrelational scale, a lot of money is often good, but a wealthy person might score low on fidelity (fat bank accounts open many romantic doors). Moreover, wealthy people tend to believe that they are more deserving, and hence their caring behaviour might be lower. In the same vein, having a good sexual appetite is usually good, but a large discrepancy between the partners’ sexual needs is not conducive to that crucial romantic connection. If, for instance, a man wants to have sex once or twice a week and a woman wishes to have sex multiple times a day, would they be suitable partners? Clearly not. And even if all these nonrelational factors match up, partners still won’t bring out the best in each other unless they truly connect.

For many people, the quest for the perfect person based on qualities such as beauty, intelligence and wealth (instead of the perfect partner, who offers connection and flourishing) is a major obstacle to finding The One. Since life is dynamic and people change their attitudes, priorities and wishes over time, achieving such romantic compatibility is not a onetime accomplishment, but an ongoing process of mutual interactions. In a crucial and perhaps little-understood switch, perfect compatibility is not necessarily a precondition for love; it is love and time that often create a couple’s compatibility.

Can a person cognisant of the two scales use this knowledge to aid the quest? There’s a calculus, it turns out. We all know the drill. You compile a checklist of the perfect partner’s desirable and undesirable traits, and tick off each trait that your prospective partner has. This search approach is pretty much how online dating works: it focuses on negative, superficial qualities, and tries to quickly filter out unsuitable candidates. Eliminating bad options is natural in an environment of abundant romantic options.

But the checklist practice is flawed because it typically lacks any intrinsic hierarchy weighting the different traits. For instance, it fails to put kindness ahead of humour, or intelligence before wealth. And it focuses on the other person’s qualities in isolation, scarcely giving any weight to the connection between the individuals; in short, it fails to consider the value of the other person as a suitable partner.
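To see why the missing hierarchy matters, here is a minimal sketch in Python; every trait, score and weight in it is invented purely for illustration, and it is not offered as a real matching method.

```python
# Hypothetical illustration of the checklist problem described above.
# Traits, scores and weights are all made up for the example.
candidate = {"kindness": 9, "humour": 7, "intelligence": 8, "wealth": 4}

# Unweighted checklist: every trait counts the same.
unweighted_score = sum(candidate.values())

# Weighted checklist: kindness deliberately outranks humour and wealth.
weights = {"kindness": 3.0, "intelligence": 2.0, "humour": 1.0, "wealth": 0.5}
weighted_score = sum(weights[trait] * score for trait, score in candidate.items())

print(unweighted_score, weighted_score)  # 28 52.0
```

Even the weighted version still scores the other person in isolation; it captures nothing about the connection between the two partners, which is precisely the gap the checklist approach leaves.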

The Empty Brain.

 


No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
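For readers who want to see that encoding concretely, here is a minimal sketch in Python. The bit patterns it prints are standard ASCII; nothing in it is specific to any particular file format.

```python
# Minimal sketch of the byte-level encoding described above.
word = "dog"
data = word.encode("ascii")  # one byte per letter: b'dog'

for letter, byte in zip(word, data):
    # print each letter, its byte value, and its 8-bit pattern
    print(letter, byte, format(byte, "08b"))

# Output:
# d 100 01100100
# o 111 01101111
# g 103 01100111
```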

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
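As a toy example of such stored rules, the sketch below (Python again, with a made-up function name) copies a pattern of characters and transforms the copy; the ‘rule’ is itself just something held in the machine alongside the text it operates on.

```python
# A toy 'algorithm': a stored rule for copying and transforming a pattern of text.
def touch_up(text: str, old: str, new: str) -> str:
    # return a transformed copy; the original pattern is left untouched
    return text.replace(old, new)

draft = "the cat sat on teh mat"
print(touch_up(draft, "teh", "the"))  # the cat sat on the mat
print(draft)                          # the original string is unchanged
```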

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

Important Trends in Education Today.

What Are Some Key Trends in Learning?

Education has looked almost the same for over a century. Learners are still seated today much as they were 300 years ago. To many, these systems have become outdated. In efforts across the world, innovators are working to improve, and possibly transform, education for all by incorporating new ways of learning and exponential technologies. These learning experiments can succeed and become learning trends that affect millions.

Learning in small bursts

Teachers have found it impactful to incorporate short bursts of learning on different topics throughout the day. Bite-sized learning is delivered in brief interactive sessions, ensuring the learner’s attention is not lost. This micro-learning format can reduce the need to learn big sets of information at once, while lessening the stress which tends to accompany that type of effort.

Adding creativity back to the classroom

Classrooms focused on STEM (Science, Technology, Engineering, and Mathematics) have worked well over the past decades, but creative people are needed now more than ever. Educators have found that creativity is vital to our learning process and to the way we express ourselves as humans. Learning about the creative arts is regaining popularity and is now back in some curricula. Even more, many companies have begun to recognize the value of creativity, going so far as to list it as a desired trait in their recruiting efforts.

Changing teaching methods

Leaders in education have started to reach children through new ways of teaching, such as simulated learning, educational games, and even edutainment (educational entertainment). Simulated learning experiences put the student inside the learning. For example, RoomQuake turns a regular classroom into a scaled-down earthquake simulation. Educational games and edutainment learning methods offer students the chance to get out of the book and into an experience, opening up opportunities for valuable teaching moments. Additionally, this approach gives learners ample opportunity to develop real-world skills, such as working collaboratively and essential time and task management skills.

Personalization

Some education experts predict that the classrooms of the past century will soon fall by the wayside in favor of more individualized learning. With the freedom that comes from not being locked down by device, location, or time of day, people can learn at their own pace. Self-paced learning not only gives fast learners an edge but can also highlight learning challenges more quickly. Programs and courses that cater to specific learning challenges like ADHD and dyslexia are providing learners with the tools they need to learn on par with others in their age range.

Online classes

Online education has been available via the internet for decades, though it has gone through considerable transformation. Online classes are easily accessible for any person with internet access, whether through computers, tablets, phones, or smart devices. This opens up opportunities for learning on internet platforms that teach people using video tutorials and visual learning methods.

Emerging technologies

From a bird’s-eye view, emerging technologies in education can appear volatile, but this exploration and discovery can close important gaps in access to and quality of education. Leading educators have taken the first steps toward teaching with the help of Artificial Intelligence (AI) and blockchain in order to guide students’ educational journeys. Other educational innovators have focused on making education more engaging and appealing through virtual reality (VR) and augmented reality (AR). And some leaders are exploring the use of the Internet of Things (IoT) to offer more streamlined communications to students and parents in real-time.

The Future of Learning.

We’ve had the same industrialized education system for decades, and it’s clear that its challenges have grown more acute with each new generation of children. There are severe limitations that hold back the potential of our education system – like grading, subject material, and cognitive and developmental restrictions. The way we deliver education is often demotivating and can set children up for failure from the start. Let’s look at some key challenges facing our current curriculum.

Grading systems are failing to motivate students

In the traditional education system, students’ grades begin at the highest level and are marked down for any mistakes made. At best, it’s demotivating. At worst, it has nothing to do with the way the world works for adults. Maybe we need to take a lesson from the world of gaming, where you start at zero and earn points for everything you do successfully.

Lecturing models don’t always meet students’ needs

Most classrooms have a teacher up in front of a room lecturing to students, which can create confusion and boredom for many students. The one-teacher-fits-all model comes from an era of scarcity, when great teachers and schools were rare. Class size and student-teacher ratios further complicate this old-school approach (pun intended!).

Educational content lacks relevance for today’s world

How much of what children learn in elementary and high school is useful to them later? Current subject matter often leaves much to be desired when it comes to critical life lessons that any child must experience to successfully navigate adulthood. Students need to develop an aptitude for critical thinking and strong communication and collaboration skills, and today’s content (and the practice of teaching to tests) often falls short.

Schools value conformity over imagination

Industrialized educational programs have so much structure, rote memorization, and analytical learning that the opportunity for creativity is squashed before it can fully develop. Where do we foster imagination?

Boredom is causing alarming dropout rates

If learning in school is a chore, boring, or emotionless, then students will tend to be disengaged. Every day, an average of 7,200 students drop out of high school, adding up to 1.3 million each year. This means only 69 percent of students who start high school will finish four years later. And more than 50 percent of these high school dropouts cite boredom as the main reason they left.
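The two figures quoted above fit together if ‘every day’ is read as every school day; a quick back-of-the-envelope check, assuming roughly 180 school days in a year (an assumption, not a cited figure), is sketched below.

```python
# Back-of-the-envelope check of the dropout figures quoted above.
dropouts_per_school_day = 7_200
school_days_per_year = 180          # assumed, not taken from the text

print(dropouts_per_school_day * school_days_per_year)  # 1,296,000 - roughly 1.3 million
```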