Vertical farms could provide the food of the future.
According to a growing body of evidence, the human species may be in crisis. Earth is becoming overpopulated, polluted, and drained of resources at an alarming rate. The basic issue is one of numbers: there are a great many of us, and not much land. The planet currently holds 7 billion human beings, a figure projected to rise to 9.5 billion by 2050.1 To feed that growing population, we would need additional farmland roughly the size of Brazil.2 Yet we cannot create much more arable land than we already have. The productivity of plant life may already be at maximum capacity, even as we try to increase crop yields year after year.3 We must meet our needs with the land we have already taken.
Unfortunately, we are not using our resources wisely. Commercial agriculture typically carries high energy costs from irrigation, fertilizer, and fuel. As the cost of fuel rises, so does the market cost of food. Products like corn ethanol may reduce fuel costs in the short term, but they divert part of our limited crop supply away from consumption and may further increase food prices. The current system also forces us all to depend on a select few countries to feed the whole world, which drives up shipping costs and means that one poor growing season can place dire limitations on all of us.4
Furthermore, current irrigation methods waste more water than any other human activity. Between 70% and 90% of the world’s freshwater supply (which is a mere 3% of all water to begin with) is used for irrigation of farmland and then rendered unsanitary for human use due to pesticides.2 On top of these issues, modern agricultural practices cause high outputs of pollutants in our air and water and lead to food-borne illnesses due to unsanitary animal overcrowding. Our only hope is to change the way we feed ourselves, to reduce waste and to maximize efficiency without increasing our consumption of materials. What we need is another agricultural revolution.
Solving a Growing Problem
The vertical farm is a potential solution to these global issues. Ideally, a vertical farm would be a large, independently operating structure centrally located in a major city. It would feature two multistory, skyscraper-like buildings working together: one to manage food production with nutrient film techniques, and another to manage waste through living machines and to generate energy with photovoltaic cells and carbon sequestering. Popularized in recent years by Dr. Dickson Despommier, a professor at Columbia University, the concept originated in the 1950s with a “Glass House” and has been further developed by several innovators over the years.5 Controlled Environment Agriculture (CEA), which allows for control of temperature, pH, and nutrients, has also been employed for many years in commercial greenhouses to produce crops unsuited to the local climate. Although these greenhouses are often high-yielding, they typically burn fossil fuels that produce considerable emissions, and they do not eliminate agricultural runoff. In contrast, the vision of the vertical farm is one of grand scale: sky-scraping, glass-paneled buildings placed in every major urban center to provide affordable, carbon-neutral, pollutant-free food to city residents. A project of this scale involves a huge number of factors, all dependent on the ratio of cost to potential yield. It could completely change the way we get food from the ground to the table. Instead of shipping produce from several states away, or from outside the country, grocery stores could stock fruits and vegetables grown right in the heart of their city. The result would be less pollution, greater production, and better health for ourselves and the planet.
Designing the Future Farm
Constructing vertical farms within major cities could eliminate the land shortage, pollution, deforestation, water shortage, and unsanitary practices common in agriculture around the world. Office space could be located nearby for the business and management end of the operation. The design would carry high initial costs, but savings from reduced energy and maintenance would compensate for those losses over time, and sales profits are projected to be comparable to stock market averages.2 Despite the high up-front costs of technology and construction, vertical farms could have astounding effects on local and global populations. The cleanliness and convenience of an environmentally friendly food center would raise property values in surrounding urban neighborhoods and improve quality of life. It would be an economic boon to cities, generating a wide range of new urban jobs, though it would cause employment and sales losses for rural farmers. The global effects could be even more important. The design would be especially effective in tropical and subtropical locations, where incoming solar radiation is at a maximum and controlled climates are easiest to maintain. Implemented in less developed nations in these regions, vertical farms could transform economies and act as a catalyst to slow excessive population growth as urban agriculture is adopted as a strategy for sustainable food production. They might also reduce or eliminate armed conflict over natural resources such as water and land, as both would become more available through successful conservation.
Putting the Design to Work
Vertical farming promises to be the ultimate design for sustainability and conservation of resources. A controlled climate allows for high-yield, year-round crop production. Consider strawberries as an example: 1 acre of berries grown indoors produces as much fruit in one year as 30 outdoor acres.2 In general, growing crops indoors is four to six times more productive than outdoor farming. The method also protects plants against inclement weather, parasites, and disease, so fewer crops are lost and toxic pesticides are not needed. Using special dirt-free hydroponic systems and re-circulating ‘living machines’, we can even recycle city wastewater and turn it into clean water for irrigation.6 This approach to water use would drastically reduce consumption, eliminate most pollutants found in runoff, and lead to cleaner rivers, lakes, and oceans. Additionally, a vertical farm could add energy “back to the grid” rather than consuming nonrenewable fossil fuels. Tractors, plows, and shipping trucks, all “gas guzzlers”, would be unneeded in a vertical farm. A combination of solar panels and sequestered methane generated from composting non-edible organic materials could supply the heat necessary for a controlled climate.7 Essentially, the building could run on sunshine and garbage. Waste from other parts of the city could even be reduced if it were incorporated for methane generation in the farm. Vertical farming might be our answer for the low-waste, high-yield farming of the future.
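The land savings implied by these ratios are easy to check with back-of-the-envelope arithmetic. In the sketch below, the 30:1 strawberry figure and the four-to-sixfold general factor come from the sources cited above, while the 100-acre indoor footprint is a purely hypothetical assumption used only for illustration:

```python
# Rough land-savings arithmetic using the ratios cited in the text.
# The indoor footprint is a made-up illustration, not a real facility.

indoor_acres = 100                            # hypothetical vertical-farm floor area
strawberry_ratio = 30                         # 1 indoor acre ~ 30 outdoor acres (cited)
general_low, general_high = 4, 6              # indoor vs. outdoor productivity (cited)

outdoor_equiv_strawberries = indoor_acres * strawberry_ratio
outdoor_equiv_general = (indoor_acres * general_low, indoor_acres * general_high)

print(outdoor_equiv_strawberries)   # 3000 outdoor acres of strawberries replaced
print(outdoor_equiv_general)        # (400, 600) outdoor acres for typical crops
```

Even at this toy scale, a single city block of stacked growing floors would displace hundreds to thousands of acres of conventional farmland, which is the core economic argument the article makes.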
However, we likely won’t see this solution implemented any time soon. While technologically possible today, urban vertical farms are unlikely to find a place in society in the near future because of the finances required to begin this endeavor. Hope lies with universities and private institutions to expand the idea with further research and with far-sighted investors to provide the funds to implement this incredible solution. Perhaps with more information, a corporation could be convinced to finance the first commercial vertical farm.
1 United Nations, Department of Economic and Social Affairs, Population Division (2011): World Population Prospects: The 2010 Revision. New York.
2 Despommier, Dickson D. The Vertical Farm: Feeding Ourselves and the World in the 21st Century. New York: Thomas Dunne, 2010. Print.
3 Li, Sophia. “Has Plant Life Reached Its Limits?” The New York Times. The New York Times, 20 Sept. 2012. Web. 15 Oct. 2012.
4 Lowrey, Annie. “Experts Issue a Warning as Food Prices Shoot Up.” The New York Times. The New York Times, 4 Sept. 2012. Web. 15 Oct. 2012.
5 Hix, John. 1974. The glass house. Cambridge, Mass: MIT Press.
6 Ives-Halperin, John and Kangas, Patrick C. 2000. 7th International Conference on Wetland Systems for Water Pollution Control. International Water Association, Orlando, FL. pp. 547-555. http://www.enst.umd.edu/files/secondnature.pdf
7 Concordia University (2011, October 4). From compost to sustainable fuels: Heat-loving fungi sequenced. ScienceDaily. Retrieved October 18, 2012, from http://www.sciencedaily.com/releases/2011/10/111003132441.htm.
A closer look at which earphones are better for listeners.
How many times have you been in the library, on the T, or at the gym and received angry looks from the people closest to you because they can hear the music through your headphones? While listening to your favorite playlist may be the most enjoyable way to pass the time, it is not fun for your ears or for those around you.
Our ears and the auditory system are delicate structures. The ear has three parts: the outer ear, the middle ear, and the inner ear. The outer ear consists of the visible lobe and the ear canal, the passage into which we place our earbuds. The middle ear is an air-filled chamber just past the eardrum; its tiny bones conduct sound onward to the inner ear. The inner ear is fluid-filled, and its job is to transduce incoming sound waves into physical fluid waves that generate the sensation of sound. Music travels through the ear canal as sound waves and is transformed into physical waves once it reaches the fluid-filled inner ear.1 The waves of fluid move the sensory receptors of the ears, known as hair cells. Hair cells are cylindrical cells that sway and bend in sync with the movement of the fluid in the inner ear; their position determines whether or not a signal is sent to the brain to alert it to sound.1 Like the hairs on your head, hair cells are easily broken. If the waves are too strong or intense (as from very loud music), the hair cells can be permanently damaged. Unlike the hairs on your head, they do not grow back.1 As the number of hair cells decreases, so does your ability to hear.
It is important to protect our auditory systems. A high functioning auditory system allows us to truly appreciate every note of a song, hear a friend calling our name down the street, or hear a car honk so we know not to cross the street.
There are many ways we can protect our ears, and one of the easiest and most practical is picking the right pair of headphones. There is much debate about which type is best: “in-ear” headphones, such as iPod earbuds, or “on-ear” headphones, such as Beats by Dr. Dre.2 Currently, many suggest that “on-ear” headphones are better for your hearing because they allow for the passage of more air, but a definitive winner has yet to be chosen. While some models, such as Bose noise-cancelling headphones, are far beyond the price range of a college student, there are many affordable options.3
- AKG K 311 Powerful Bass Performance, $19.95 – affordable, with a semi-open design that allows airflow for comfort and helps prevent damage by letting in more external environmental sound.3
- Logitech UE 350, $59.99 – automatically dampens sound without losing quality.4
- Beyer Dynamic DTX300p, $64.00 – an on-ear design that keeps sound from building up too much in the ear canal.3
However, no matter which headphones you have, it is important to remember to keep the volume down. This is both for the sake of your ears and to make sure the lady sitting across from you does not start screaming, which would also be bad for your hearing.
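“Keep the volume down” can be made concrete with a widely used occupational guideline, the NIOSH recommended exposure limit: 85 dBA is considered safe for 8 hours a day, and the allowable listening time halves for every 3 dB above that (the 3-dB exchange rate). The sketch below applies that rule; the specific listening levels plugged in are illustrative assumptions, not measurements of any particular headphones:

```python
# Safe daily listening time under the NIOSH guideline: 85 dBA for 8 hours,
# with allowed time halving for every additional 3 dB. The example levels
# below are illustrative assumptions, not measured headphone output.

def safe_hours(level_dba, limit=85.0, base_hours=8.0, exchange=3.0):
    """Return the allowed daily exposure in hours for a given sound level."""
    return base_hours / (2 ** ((level_dba - limit) / exchange))

print(safe_hours(85))    # 8.0 hours
print(safe_hours(94))    # 1.0 hour
print(safe_hours(100))   # 0.25 hours (15 minutes)
```

The takeaway is how steeply the safe time falls: each small nudge of the volume slider can cut your ears' daily budget in half.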
1 2012. Auditory System. ScienceDaily. Retrieved from http://www.sciencedaily.com/articles/a/auditory_system.htm
2 Grobart, Sam (2011, 12 21). Better Ways to Wire Your Ears for Music. The New York Times. Retrieved from http://www.nytimes.com/2011/12/22/technology/personaltech/do-some-research-to-improve-the-music-to-your-ears.html?pagewanted=all&_r=0
3 2012, 01 20. Which Headphones Should I Use? Deafness Research UK. Retrieved from http://www.deafnessresearch.org.uk/content/your-hearing/looking-after-your-hearing/which-headphones-should-i-use/
4 Shortsleeve, Cassie (2012, 12 10). The Coolest New Headphones. Men’s Health. Retrieved from http://news.menshealth.com/best-headphones/2012/09/10/
Sick children are cared for by parents, but what happens when they grow up?
Many children in the United States live with chronic disease; Type 1 diabetes, sickle cell disease, arthritis, asthma, and cystic fibrosis are common diagnoses. Yet once children grow into adulthood and age out of the pediatric healthcare system, they often find themselves unprepared to advocate for their own medical needs. In the pediatric healthcare system, the child’s perspective is always important, although ultimately the child’s guardian is legally responsible (e.g., signing informed consent for a procedure). Once the child reaches the age of 18 (the age of majority in most states), he or she assumes full legal responsibility for choosing and consenting to treatments. With the sudden diminishment of the caregiver’s legal authority, it becomes the child’s responsibility to make the best decisions for his or her own body and lifestyle.
Without a gradual introduction to the realm of consent, a child cannot possibly be expected to understand the complexities of being a medical advocate. Effective transition is different from efficient transfer, where transition is the long-term accumulation of a child’s medical responsibility and transfer is simply the physical move to an adult care facility.1 Merely sending a child and his medical record to a new adult provider is not synonymous with actually preparing the child to care for his chronic illness into adulthood. Ultimately, the latter will be most advantageous to children, families, physicians, and the medical system at large.
“Patient activation” as the mechanism for achieving meaningful and long-lasting involvement in pediatric chronic illness is a much-debated topic. Patient activation is defined by the patient’s completion of four stages: 1) understanding that their role is paramount; 2) having the appropriate understanding and certainty to make a decision; 3) having taken actual action toward health goals; and 4) persisting in the event of adverse situations.2 It has long been known that actively engaged patients report improved health outcomes, and that successful chronic illness management skills can increase function as well as minimize pain and healthcare costs.3,4 The benefit to pediatric patients would be tremendous, particularly to their long-term physical, social, and emotional functioning. The problem remains: even with patient activation conceptualized, how can pediatric patients be ‘activated’?
The solution is a multifaceted approach targeting the child, family, medical team, and medical culture at large. If medicine comes to be increasingly viewed as “consumer driven,” individuals may be less likely to accept doctors’ opinions without ensuring their own voice has been heard.2 For example, few people would tolerate going to a restaurant and being told what to order. Yet medical decisions are often made without the patient’s complete understanding.1
Medical education must be progressive and developmentally appropriate for patients and their families; too much too fast would only be overwhelming. Nevertheless, patients and families must be empowered. How this happens inevitably depends on the disease, the age of the child, cultural considerations, and more. For example, a child living with arthritis needs an emphasis on physical activity as he matures. On the other hand, a child living with HIV should be educated about safe sex practices only when it is developmentally and culturally appropriate, involving the family along the way. Additional concerns, such as procuring health insurance with a pre-existing condition and obtaining prescription medications, are important to discuss with the child and family as the transition begins, to further its success.
How can this success be measured? The Patient Activation Measure (PAM) was developed by Dr. Judith Hibbard, a professor of health policy at the University of Oregon, to assess patients’ degrees of involvement in their care. There are four subscales: ‘Believes Active Role is Important’ (e.g., “When all is said and done, I am responsible for managing my health condition”), ‘Confidence and Knowledge to Take Action’ (e.g., “I understand the nature and causes of my health condition(s)”), ‘Taking Action’ (e.g., “I am able to handle symptoms of my health condition on my own at home”), and ‘Stay the Course under Stress’ (e.g., “I am confident I can keep my health problems from interfering with the things I want to do”).2 These four core values of patient activation underscore the intersection of a belief that the patient is an important actor in their medical care, thorough education about disease and lifestyle, and an active role. In a study with an adult population, the PAM was significantly related to health-related outcomes: patients who scored as highly activated experienced improved health-related outcomes compared with their counterparts.5
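To make the structure of the four subscales concrete, here is a hypothetical scoring sketch. The real PAM is a licensed instrument with its own calibrated scoring scheme; this example only illustrates the general idea of grouping Likert-style responses (1 = strongly disagree to 4 = strongly agree) under the four subscale names quoted above and averaging them:

```python
# Hypothetical illustration of subscale averaging for an activation-style
# survey. This is NOT the licensed PAM scoring algorithm; the responses
# and the two-items-per-subscale layout are invented for illustration.

responses = {
    "Believes Active Role is Important":       [4, 3],
    "Confidence and Knowledge to Take Action": [3, 3],
    "Taking Action":                           [2, 3],
    "Stay the Course under Stress":            [2, 2],
}

# Mean response per subscale, on the 1-4 agreement scale.
subscale_means = {name: sum(items) / len(items)
                  for name, items in responses.items()}

for name, mean in subscale_means.items():
    print(f"{name}: {mean:.2f}")
```

A profile like this one, strong on beliefs but weaker on action and persistence, is exactly the kind of pattern a transition program would want to target.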
The PAM is a useful tool for measuring patient activation, but the problem persists: how do we meaningfully activate pediatric patients? The answer lies in part in transition clinics, an innovation that patients have been benefiting from for several years. Transition clinics operate within a given specialty (e.g., rheumatology) with the goal of providing additional resources, primarily educational programs and skills training, to prepare a child for adult care.6 The age at which a child becomes engaged in the transition clinic varies with the age at diagnosis, social support, disease severity, and developmental stage. For instance, a child diagnosed with an illness at age fifteen should not begin the transition clinic right away, while another child diagnosed with the same illness at age eight may find the transition clinic appropriate at age fifteen.
Ultimately, the issue in part returns to the present medical culture: how doctors are trained to communicate with patients, and the historically paternalistic medical model. It is plausible that patients who have a strong, respectful relationship with their doctors will have more positive medical experiences than those with poor patient-physician relationships. Thus, strengthening this relationship should go in tandem with encouraging and initiating a transition process for pediatric patients.
1 Peter et al., 2009. Transition From Pediatric to Adult Care: Internists’ Perspectives. Pediatrics. 123(2):417-423.
2Hibbard et al., 2004. Development of the Patient Activation Measure (PAM): Conceptualizing and Measuring Activation in Patients and Consumers. Health Serv Res. 39(4 Pt 1): 1005–1026.
3 Von Korff et al., 1997. Collaborative Management of Chronic Illness. Annals of Internal Medicine.127(12):1097–102.
4 Glasgow et al., 2002. Self-management Aspects of the Improving Chronic Illness Care Breakthrough Series: Implementation with Diabetes and Heart Failure Teams. Annals of Behavioral Medicine. 24(2):80–7.
5J. Greene and J. H. Hibbard. 2011. Why Does Patient Activation Matter? An Examination of the Relationships Between Patient Activation and Health-Related Outcomes. Journal of General Internal Medicine.
6 Crowley, et al. 2011. Improving the transition between paediatric and adult healthcare: a systematic review. ADC. 1:1-6.
Jennie David (CAS 2013) is a psychology major from Nova Scotia. She hopes to use her personal experience with Crohn’s Disease to support and inspire chronically ill children as a future pediatric psychologist. Jennie can be reached at email@example.com.
Many of the objects surrounding Earth are just man-made waste.
At this very moment, several million useless pieces of “space junk” orbit our planet, presenting an alarming concern not only for the United States but for nearly every country on Earth. Technology is innately tied to space; with the growth of cell phones, GPS systems, and satellite television and radio, consumers understand that modern life requires functioning satellites. However, nonfunctioning satellites and their fragments, left orbiting our planet for decades, pose a grave risk to the space technology that supports our current way of life. We face three challenges. First, we must take measures to reduce the debris left in space from now on. Second, we must ensure we do not compromise our national security interests in space. Finally, debris left by past space missions must be removed to prevent collisions with current and future space endeavors.
A Call to Action
Each of these challenges requires a combination of technical ingenuity, international cooperation, and financial commitment from multiple nations. The alternative is a failure whose consequences would be grave for everyone. Every new piece of debris in space increases the potential for “Kessler Syndrome,” in which collisions between fragments of space debris and larger satellites create exponentially more space debris. Kessler Syndrome could leave our satellite networks and space exploration programs useless for decades. For example, China’s destruction of its own nonfunctioning satellite in 2007 yielded more than 3,000 pieces of shrapnel, which have since endangered both the International Space Station and the space shuttle Atlantis.1,2 This single action effectively undid nearly all the progress made on reducing space debris over the preceding decades.
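The runaway character of Kessler Syndrome can be illustrated with a toy model in which a fixed fraction of debris pieces suffers a collision each year and each collision spawns many new fragments. Every rate below is an illustrative assumption chosen for clarity, not a measured orbital parameter:

```python
# Toy model of a debris cascade ("Kessler Syndrome"): each year a small
# fraction of pieces collide, and each collision spawns many fragments.
# All rates are illustrative assumptions, not real orbital statistics.

def simulate_cascade(initial_pieces, years,
                     collision_rate=0.001, fragments_per_collision=100):
    """Track the debris population year by year under the toy rates."""
    pieces = initial_pieces
    history = [pieces]
    for _ in range(years):
        collisions = pieces * collision_rate          # expected collisions this year
        pieces += collisions * fragments_per_collision  # new fragments added
        history.append(pieces)
    return history

history = simulate_cascade(initial_pieces=3000, years=10)
print(f"year 0: {history[0]:.0f}, year 10: {history[-1]:.0f}")
# → year 0: 3000, year 10: 7781
```

Because new fragments themselves become collision targets, the population grows by a constant factor each year (here 1.1), so the debris count compounds exponentially, which is exactly the feedback loop the 2007 shrapnel cloud threatens to feed.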
Following China’s actions in 2007, the international community has made progress in defining some standards on space debris, although there is still substantial room for improvement. The Orbital Debris Program Office within NASA has worked on guidelines to reduce space debris. NASA and the Department of Defense have also been working together to track some of the 500 million pieces of space debris orbiting Earth.3 These efforts will be expanded as part of NASA’s 2013 budget proposal to shift funding toward “orbital debris and counterfeit parts tracking and reporting programs;” the Department of Defense is beginning contract bids with aerospace and software companies (like Lockheed Martin and Raytheon) over new “Space Fence” software to track orbiting particles.4,5
Satellites also must be able to move from the popular Low Earth Orbit (LEO) zone to a “graveyard” orbit when they are no longer operational. This requirement helps prevent older satellites from becoming inadvertent hazards, which is precisely what happened in 2009 when a defunct Russian satellite collided with a US communications satellite over Siberia.6 But while US measures to prevent debris have helped keep US satellites from becoming destructive, about two-thirds of the debris in space comes from non-US launches. A domestic approach is therefore not enough; we must pursue international agreements to reduce space debris.
Some efforts have already been made in this field. The Inter-Agency Space Debris Coordination Committee (IADC) and the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) have provided a forum for many of the great space-faring nations (including the US, Russia, and China) to discuss the issues surrounding space debris. Now the European Union has published an International Code of Conduct for Outer Space Activities, which serves as a framework for multilateral negotiations to begin. However, the United States has some concerns about potential security risks.
Many of the code’s provisions aim to prevent incidents like the 2007 Chinese satellite destruction, since the intentional destruction of enemy satellites could jumpstart Kessler Syndrome and create a cloud of debris around the Earth. In doing so, however, the code also bars states from pursuing the militarization of space, which has led to domestic concerns in the US. Objections have been repeatedly raised against the international code, with many fearing it is “disarmament of space” in disguise.
Concerns for the Future
The United States Secretary of State has responded to these claims by emphasizing that the proposed agreement would not be legally binding. Since the agreement would not carry the force of law, it would technically not be a treaty, and Senate approval would not be necessary. Regardless, it is clear that any international agreement would need to tread lightly on the topic of national security. After all, if the United States has security concerns about the wording of the agreement, we can surely expect other military powers, like China, to raise objections. However, the code’s provisions are consistent with all existing practices of the National Aeronautics and Space Administration, the Pentagon, and the State Department.6
However, such international agreements, aimed at reducing space debris while preserving each nation’s right of protection in space, remain controversial and will take time to conclude. And while removing existing debris is an undeniably positive ambition, the primary obstacle is monetary. None of the existing ideas for removing space junk is cost-effective, but new concepts lend hope for a more affordable solution. Ideas like electrodynamic tethers, which drag satellites down to their inevitable destruction in Earth’s atmosphere, and nanosatellites, which throw nets over debris and pull it into the atmosphere, have the potential to reduce the amount of debris in orbit at reasonable cost, but they are far from deployment.7
Giving NASA the resources required to battle this problem is a key way to advance these ideas. The Obama administration has highlighted the importance of such measures in its national space policy.8 NASA has specifically created a budget for researching and tracking space debris, despite having its share of the total federal budget reduced from 4.4% (in 1966) to less than 0.5%.9 By allocating funding to this specific field, we can begin taking steps to prevent space debris from destroying our modern way of life.
An Optimistic Outlook
Space debris is a problem that has only grown worse over time, with every collision in space creating hundreds of thousands of new potential disasters. Through international cooperation, domestic understanding, and technological investment we can solve this problem. We can begin by highlighting the importance of this issue and not allowing it to escape the public eye. Our dependence on space is something we often take for granted, but it hangs precariously on the edge of disaster if we do not answer this challenge. However, we will choose to answer the challenge, not because it is easy, but because it is hard.
1 Malik, T. (2012, 01 29). Iss dodges space debris from Chinese satellite. Huffington Post. Retrieved from http://www.huffingtonpost.com/2012/01/30/iss-dodges-debris-from-de_n_1241167.html
2 Schwartz, E. I. (2010, 05 24). The looming space junk crisis: It’s time to take out the trash. WIRED Magazine, Retrieved from http://www.wired.com/magazine/2010/05/ff_space_junk/all/1
3 NASA Orbital Debris Program Office. (2012, 03). orbital debris frequently asked questions. Retrieved from http://orbitaldebris.jsc.nasa.gov/faqs.html
4 National Aeronautics and Space Administration, (2012).Fy2013 president’s budget request summary. Retrieved from website: http://www.nasa.gov/pdf/659660main_NASA_FY13_Budget_Estimates-508-rev.pdf
5 Staff. (2012, 03 13). New debris-tracking ‘space fence’ passes key test. CBS News. Retrieved from http://www.cbsnews.com/8301-205_162-57396451/new-debris-tracking-space-fence-passes-key-test/
6 Broad, W. J. (2009, 02 11). Debris spews into space after satellites collide. The New York Times. Retrieved from http://www.nytimes.com/2009/02/12/science/space/12satellite.html
7 Zenko, M. (2011). A code of conduct for outer space – policy innovation memorandum no. 10. Council on Foreign Relations, Retrieved from http://www.cfr.org/space/code-conduct-outer-space/p26556
8Office of the President of the United States, (2010). National space policy of the united states of America. Retrieved from website: http://www.whitehouse.gov/sites/default/files/national_space_policy_6-28-10.pdf
9 Rogers, S. (2010, 02 01). NASA budgets: Us spending on space travel since 1958 updated. The Guardian. Retrieved from http://www.guardian.co.uk/news/datablog/2010/feb/01/nasa-budgets-us-spending-space-travel
Death and destruction in the developing world – are plants to blame?
The media often reports on atrocities committed in developing countries. Most of these crimes are attributed to vicious dictators, nasty civil wars, poor infrastructure, and ignorance. However, one of the main culprits, guilty of causing countless cases of malnutrition, starvation, and poverty, is not human but plant. Plants are generally thought of as passive organisms that create an ambient and stable environment, but many people do not know that plants can be parasitic. These parasitic plants generally target other plants, from which they absorb the nutrients necessary for growth and survival. One of the most widespread and deadly parasites is the genus Striga, of which Striga asiatica causes some of the greatest damage. Striga asiatica is native to Sub-Saharan Africa and Asia, places where malnutrition, starvation, and poverty are rampant and most farmers are subsistence farmers.
Striga asiatica is a beautiful flowering plant, but its real nature is deadly for many farmers. Indeed, its harm can be seen in the name American farmers gave it after it was briefly introduced to the United States: witchweed. Its seeds are very small, almost indiscernible to the naked eye. Not only is it nearly impossible to see, but it can lie dormant in the earth for up to 20 years while waiting for a signal from a host to initiate germination. Striga asiatica growth is triggered by chemical signals from hosts, which are usually grasses such as corn and sorghum. Once the plant has parasitized its host, the host’s growth is severely stunted. Striga asiatica is known to reduce crop yields by up to 90%, and it releases thousands of hardy seeds to begin the cycle again.1
Current Prevention Techniques
One major policy response to the agricultural losses caused by this parasite has been the attempt to establish better farming practices. One approach is “intercropping,” wherein Desmodium, a type of legume, is grown alongside the crop. The properties of Desmodium allow it to suppress Striga asiatica while the crop grows.2 Another approach has been to breed Striga asiatica-resistant crops. This research has focused mainly on Zea mays (corn), and some very promising finds have been made; in fact, a strain of Zea mays known as TMV-1 suffered only a 13.3% yield loss.3 Obtaining specialized corn, however, is often expensive and an unlikely alternative for subsistence farmers in many of the afflicted regions.
Other approaches have focused on herbicides. A promising and inexpensive type of herbicide known as acetolactate synthase (ALS)-inhibiting herbicides has been shown to suppress Striga asiatica growth by 75-95%.4 However, these approaches have been difficult to implement and do not completely eradicate the problem posed by the parasite. Therefore, the most promising research that is being conducted in regards to Striga asiatica focuses on the plant-plant small molecule signaling that occurs between Striga asiatica and its host, which can theoretically provide a permanent solution.
Investigation and Future Plans
This plant-plant small-molecule signaling model is known as semagenesis and has a wide variety of potential applications. Research into communication between Striga asiatica and its host has led to the identification of numerous germination stimulants. These stimulants are mainly quinones, common cyclic organic structures that act as oxidizing agents, such as 2,6-dimethoxy-1,4-benzoquinone (DMBQ) and cyclopropyl-p-benzoquinone (CPBQ). By identifying these stimulants, scientists have been able to develop synthetic germination stimulants similar to these quinones. Since DMBQ analogs can induce germination without the presence of a host or its organic products, these cheap stimulants could be used to preemptively induce growth of the parasite.5 Without a host, the parasite can survive for only about a week. After this period, crops can be planted without suffering a reduced yield.
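The logic of this “suicidal germination” strategy can be sketched as a toy calculation. The roughly one-week host-less survival window comes from the text; the safety margin and function name below are illustrative assumptions, not values from the research:

```python
# Toy model of the suicidal-germination strategy: apply a synthetic
# DMBQ-analog stimulant, let the host-less parasites germinate and die,
# then plant the crop once the field is clear.

HOSTLESS_SURVIVAL_DAYS = 7  # Striga seedlings survive only about a week without a host


def earliest_safe_planting_day(stimulant_day: int, safety_margin_days: int = 3) -> int:
    """Day on which crops can be planted without feeding newly germinated Striga.

    `safety_margin_days` is an illustrative buffer, not a value from the article.
    """
    return stimulant_day + HOSTLESS_SURVIVAL_DAYS + safety_margin_days


print(earliest_safe_planting_day(0))   # → 10
print(earliest_safe_planting_day(14))  # → 24
```

In practice, of course, germination and die-off depend on soil conditions and seed bank density; the sketch only captures the ordering of steps.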
Further research has identified other molecules that strongly affect Striga asiatica growth, and even plant growth in general.6 These include monolignols, the precursors to lignin, the compound that makes up the secondary cell wall of plants. Monolignols are also the oxidative precursors to germination-stimulating benzoquinones and other highly reactive oxygen species. These reactive compounds are known for causing symptoms of aging and are the reason food companies inundate consumers with antioxidant labels. This work provides an intriguing glimpse into the evolution of plants because it shows that all plants, not just parasitic ones, carry pathways relevant to parasitism.
Even more recent experiments have focused on using RNAi, or RNA interference, which allows regulation of gene activity. Specifically, genes can be downregulated or knocked out in order to identify the plant parasitism mechanisms and the genes responsible for the parasite’s growth. The greatest potential use for this research is for the creation of anti-parasitism measures.
Plants have already been used to create many drugs and medical therapies. Continued research of parasitic plants using RNAi could give new insights into human metabolic pathways, genetic mutations, and countless other mechanisms. Most importantly, we may be able to develop methods of manipulating plant parasitism in developing countries, which would benefit the most from a boost in agricultural production.
The next time you hear about the food crises in Africa and Asia, remember that the people solving many of the issues plaguing these regions are not only politicians and social workers, but also scientists.
1Cochrane, V., & Press, M. C. (1997, May). Geographical Distribution and Aspects of the Ecology of the Hemiparasitic Angiosperm Striga asiatica (L.) Kuntze: A Herbarium Study. Journal of Tropical Ecology, 13, 371-380.
2Khan, Z. R., Pickett, J. A., Wadhams, L. J., Hassanali, A., & Midega, C. A. (2006, September). Combined control of Striga hermonthica and stemborers by maize-Desmodium spp. intercrops. Crop Protection, 25, 989-995.
3Mbwaga, A. M., & Massawe, C. (2001, February). Evaluation of maize cultivars for striga resistance in the Eastern Zone of Tanzania. Seventh Eastern and Southern Africa Regional Maize Conference, 174-178.
4Abayo, G. A., English, T., Eplee, R. E., Kanampiu, F. K., Ransom, J. K., & Gressel, J. (1998, July). Control of parasitic witchweeds (Striga spp.) on corn (Zea mays) resistant to acetolactate synthase inhibitors. Weed Science, 46, 459-466.
5Palmer, A. G., Gao, R., Maresh, J., Erbil, W. K., & Lynn, D. G. (2004). Chemical biology of multi-host/pathogen interactions: chemical perception and metabolic complementation. Annual Review of Phytopathology, 42, 439-464.
6Palmer, A. G., Chen, M. C., Kinger, N. K., & Lynn, D. G. (2009, May). Parasitic angiosperms, semagenesis and general strategies for plant-plant signaling in the rhizosphere. Pest Management Science, 65, 512-519.
Promoting resilience in pediatric autoimmune diseases.
Half of all Americans will deal with a chronic illness in their lifetime.1 This overwhelming statistic is, curiously, little known: chronic illnesses are rarely discussed and poorly understood by those who do not suffer from them. Twenty percent of the American population suffers from an autoimmune chronic illness, including Crohn’s disease, arthritis, asthma, and lupus.
Autoimmune diseases generally begin in early adolescence and develop a cyclic pattern of active disease, or ‘flares’, and remission. In these conditions, the immune system is stimulated unnecessarily by an unknown cause and perceives one of the individual’s organs or organ systems as a pathogen, attacking and destroying normal tissue as if it were a disease-causing bacterium or virus.
Since many of these disorders begin in youth, both physiological and psychological treatments are critical to overall health. Most autoimmune diseases are invisible from the outside; it is nearly impossible to tell that a person is ill from physical appearance alone. People with autoimmune illnesses therefore often fight to legitimize their diseases, whether to a doctor, a professor, or a peer who cannot ‘see’ the problem. Although autoimmune diseases are typically not terminal, the severity and uncertainty of their prognosis often go unrecognized because of the lack of visible symptoms and the limited awareness surrounding such illnesses.
Children are a fascinating subset of this population: patients with pediatric autoimmune diseases are at the center of their care, yet they have the least control over it. Once the child transitions to adulthood, they are expected to advocate for themselves despite having never been engaged in their healthcare or taught coping skills.
The inevitable involvement of the patient’s parents and siblings can catalyze behavioral problems within the family itself, namely depression, social isolation, anxiety, and jealousy. Siblings can easily become jealous of the attention that parents or other authority figures give to the ill child, and of the amount of family time concentrated on his or her medical care. The optimal family situation provides comfort and openness in discussing the child’s disorder while maintaining both the child’s and the family’s identity. University of Minnesota Professor Joan Patterson’s Family Adjustment and Adaptation Response (FAAR) Model holistically captures this idea within nine basic criteria.2
1) The family must balance the child’s illness with the family needs and interests of other family members.
2) Parents should maintain clear family boundaries between themselves and their children in regards to discipline, bed times, and household chores.
3) Parents and their children can develop communication competence by ensuring that the child understands important medical information that pertains to them. By modeling effective communication, parents can instill in their children the ability to respond to others’ needs with the appropriate words or actions.
4) The family should try to attribute positive meaning to their situation by focusing on newfound responsibilities instead of their fear of the disease itself.
5) Parents should demonstrate flexibility by allotting as much time to family activities (like a sporting game) as to medical obligations. A child whose family is able to ‘go with the flow’ will experience life as a “normal kid” as opposed to a young adult dealing with an illness.
6) Parents must commit to the family as a unit, remaining a “Mom and Dad” as opposed to separating the sick child from his or her caregivers.
7) The family can engage in active coping skills, including modeling positive ways to deal with stress aloud, using realistic thoughts, employing muscle relaxation techniques, and maintaining healthy sleep patterns.
8) The family must maintain social integration within their own household and the neighborhood/community.
9) The family needs to develop collaborative relationships with professionals, including doctors, in order to make decisions that are satisfactory to everyone involved in the child’s care, especially the patient. This is a key factor in the child’s immediate and long-term success. The relationship with medical professionals often leaves much to be desired in terms of listening to the child’s needs and wishes. Establishing a collaborative relationship builds a safe space where a child can express desires and fears about treatment in order to choose the best therapy.
While the FAAR model holds great potential to engage an entire family affected by chronic illness, it remains mostly a theory rather than a treatment protocol. The best implementation would be to give parents the model’s guidelines along with an initial therapy session to initiate the practice. As simple as it seems, the most effective behavior is openness between parents and children, maintained within the boundaries of the parent-child relationship, ensuring that each child receives equal treatment and attention. This behavior, modeled by the parents, is expected to be mirrored by the children, reducing further stress and issues such as sibling jealousy.
Think About It
Interestingly, the most important indicator of a patient’s prognosis is not the diagnosis but the individual’s attitude. This attitude derives mainly from how the child feels about the disease, though the reactions of parents and other important figures play a huge role in how a child internalizes the ailment. For example, a patient diagnosed with mild asthma who is deeply in denial and very depressed has a poorer prognosis and quality of life than a patient with severe asthma who has a positive outlook. The evidence indicates not only that the mind can exacerbate physical symptoms, but also that stress and immune functioning strongly interact. Factors such as a positive attitude, an effective social community, and cognitive-behavioral stress management strategies are consistently linked to better immune functioning. Conversely, chronic psychological stress, fear before surgery, and coping strategies of denial or loss of control are associated with poorer immune functioning. Crucially for the pediatric population, children have not yet had the opportunity, natural or facilitated, to develop successful coping strategies and social support of their own. A system that accommodates this unique position therefore has the potential to optimize their care and success.
Children with chronic autoimmune illnesses face a unique battle. In many ways, they are not fully autonomous, nor fully aware of what is coming, given their present stage in life. Children do not always understand the extent of their illness, yet they readily perceive that they are “different” from their peers.3 The chronic, cyclic nature of an autoimmune disease comes at the cost of a child’s education, as severe symptoms may force the child to miss school or even be hospitalized. Diseases such as Crohn’s, which may require frequent emergency trips to the bathroom, can be not only emotionally embarrassing but functionally problematic during school and other activities. These children sit at the center of their medical world as both a dependent third party and the one who endures the treatment. In addition to the power stripped from them by their diseases, they must face the struggles of daily life and peer groups that are rarely empathetic to their plight.
All Together Now
In many ways, fragments of the FAAR model have already been implemented within family relationships. Cognitive Behavioral Therapy (CBT) is often the first therapy used to readjust a person’s thinking into a more adaptive style, allowing them to interact with the world and their emotions in a more relaxed manner. It employs ‘realistic thinking’ to help the individual see what will most likely happen instead of the improbable outcomes that fuel anxiety. In a disease like arthritis, which often requires physical therapy, treatment is often delivered in a group format. The proposed therapy works like a group CBT arrangement, addressing the importance of peer social support along with optimal cognitive style, coping skills, and stress management. Within the family setting, the FAAR model should provide great support for the ill child and the family at large. On the side of the medical professionals, it is critical that they are educated to use child-friendly language and to help preserve the childhood of their patients.
Although children with autoimmune diseases generally live different lives than their peers, at the heart of their care must be a reminder that they are indeed children, and every opportunity should be made available to them. By coupling CBT and coping strategies with the FAAR intervention, their healthcare can be optimized to foster and sustain resilience as they develop and face new and challenging medical situations. No family dreams of having a child with a chronic illness, but a life that is not ideal can still be a beautiful one.
1American Autoimmune and Related Diseases Association, Inc.
2Patterson, J. M. (2002). Understanding Family Resilience. Journal of Clinical Psychology, 58(3), 233-246.
3Patterson, J., & Brown, R. W. (1996). Risk and Resilience among Children and Youth with Disabilities. Archives of Pediatrics & Adolescent Medicine, 150, 692-698.
Dogs are “man’s best friends,” but what do we know about their origins?
Dogs have been beside us for thousands of years, but until very recently, little was known about their genetic origins or their domestication process. Researchers around the world have been investigating dog genes, from mitochondrial DNA (mtDNA) to complete genomes. This research has also given important insight into the origin of the first dogs and into the specific genes behind diseases that humans and dogs share, such as diabetes.
The Origin of Species
Until canine gene research began, many assumed that dogs were close descendants of wolves domesticated by primitive humans to aid in hunting about 15,000 years ago.1 Through gene research, scientists have shown that dogs are, in fact, direct descendants of the grey wolf (Canis lupus), and may have been domesticated as early as 45,000 to 135,000 years ago. The domestication of dogs began from separate and distinct populations throughout East Asia, which then interbred and backcrossed. Backcrossing, the breeding of an individual with its parents or siblings, allowed generations to develop into homogeneous breeds. Researchers distinguished four maternal clades, or matriarchal lines, according to genetic differences. Each clade indicates the particular origin of a breed or group of breeds based on its genetic background, specifically its mtDNA. The largest and earliest clade holds the genes of most known breeds, further supporting the idea that dogs were domesticated earlier than archaeological records indicate. The other three clades formed afterwards and encompass more specific and unique dog breeds. Together, the clades give insight into the original maternal lineages of modern dog breeds.2
Dogs are one of the first examples of human manipulation of nature. Primitive humans initiated the evolution of dogs by breeding specific phenotypes of wolves, even without our modern knowledge of genetics. Through selection against genes for aggression and other traits predominant in wolves, the first domesticated generation of “dogs” began an evolutionary change. Subsequent generations had less harsh or menacing features; limbs and bone structure became smaller and new traits such as barking, a unique trait that is distinct from a howl because of tone and pitch, emerged.1 Dog breeds were then created by humans who wanted a specific kind of dog: a herder, a hunter, a guard, or a guide. Scientists have determined that specific dog breeds developed because of controlled breeding and possibly pre-natal modifications, although the real mechanics of how dog breeds specifically emerged are still unknown. Through very close and controlled breeding, different breeds continued to evolve. Tracing the lineage of each specific modern breed is complicated, however, because mtDNA can only show the mutations that occurred in the early stages of domestication. Any modifications of genes found in mtDNA are those that occurred prior to the creation of modern breeds. In other words, the full extent of gene modification in modern breeds is still very much unknown and is currently being researched.2
To determine the lineage of individual breeds, researchers turn to loci research, which is based on the location of genes on chromosomes. Loci research detects genetic changes that may have occurred through genetic drift, the random fluctuation of allele frequencies across generations.2 This allows researchers to see divergence in allele frequencies and distinguish between breeds. Further research on loci has shown that dog breeds can be clustered by ancestry. Several clusters have already been determined: K2 includes breeds of Asian origin, such as the Akita and the Shar Pei; K3 includes mastiff-type dogs, such as the Bulldog and the Boxer; and K4 includes working and hunting breeds, such as the Collie and the Sheepdog.2
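The idea of clustering breeds by divergence in allele frequencies can be illustrated with a toy sketch. The breed names are real, but the loci and frequency values below are invented for illustration; real analyses use thousands of markers and dedicated statistical tools, not a four-number vector per breed:

```python
# Toy illustration: each "breed" is a vector of allele frequencies at a few
# hypothetical loci. Breeds with similar frequencies cluster together.
breeds = {
    "Akita":    [0.90, 0.10, 0.80, 0.20],
    "Shar Pei": [0.85, 0.15, 0.75, 0.25],
    "Bulldog":  [0.20, 0.90, 0.10, 0.80],
    "Boxer":    [0.25, 0.85, 0.15, 0.75],
}


def divergence(p, q):
    """Mean squared difference in allele frequencies between two breeds."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)


def nearest_neighbor(name):
    """Return the breed whose allele frequencies are closest to `name`."""
    others = {k: v for k, v in breeds.items() if k != name}
    return min(others, key=lambda k: divergence(breeds[name], others[k]))


print(nearest_neighbor("Akita"))    # → Shar Pei (Asian-origin cluster, like K2)
print(nearest_neighbor("Bulldog"))  # → Boxer (mastiff-type cluster, like K3)
```

Even with made-up numbers, the sketch shows why divergence in allele frequencies separates the Asian-origin breeds from the mastiff-type breeds, mirroring the K2/K3 clusters described above.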
Mapping the Genome
Early research began in the late 1990s, but advances were minimal because genetic research was predominantly focused on humans and mice. By 2004, however, scientists had created a fully integrated radiation hybrid map of a dog genome. A radiation hybrid map is a genome map made from DNA fragments that have been broken apart by radiation; the fragments are then inserted into, and reproduced inside, a hybrid cell made from the DNA of two species.2 This allowed further research initiatives to identify breed-specific genomes and to create a comparative map between human and dog genes. In 2005, the largest canine sequence, the Boxer genome, was made public. It revealed that dog genes mutate at lower rates than human genes and that deletions or insertions of nucleotide bases are rare. Following this discovery, mapping other breeds became easier, because shared sequences of linked genes divide dogs into two groups with similar gene structures: one that probably developed from the first domestication and a second from specific breeding. Genome-wide association mapping can therefore be used in the later construction of breed genomes.2
Benefits of Canine Genome Research
Because the canine genome has fewer reported genes than the human genome and is more primitive in content, it allows easier insight into disorders present in both humans and dogs, such as cancer, diabetes, and narcolepsy. Diseases linked to specific dog genes and loci can help identify similarly linked genes in humans. This allows biologists to identify the mechanisms that may cause certain diseases and the interactions between genes and their effects on disease. For example, the discovery of the gene responsible for narcolepsy in dogs has indicated how certain genes in the human body relate to our sleep patterns and disorders. Canine genetics has also opened a new approach to cancer research: by identifying the sources, origins, and development of cancers and tumors in dogs, researchers may better understand genetic predisposition and susceptibility to cancer.
1 Dogs Decoded: Nova. Dir. Dan Childs. Nova, 2010. Film.
2 Ostrander, E. A., & Wayne, R. K. (2005). The canine genome. Genome Research, 15, 1706-1716. <http://genome.cshlp.org/content/15/12/1706.full>.