A betavoltaic generator is an electric generator, still experimental as of 2011, which draws its energy from a radioactive emission of beta-minus particles, that is, electrons. A radioactive isotope of hydrogen, tritium, is one of the most studied sources. Unlike most nuclear power sources, which use a nuclear reaction to generate the energy that will be converted into electricity, betavoltaic devices rely on a non-thermal conversion process.
Betavoltaic generators were first imagined in the middle of the last century. In 2005, a new diode material based on porous silicon was proposed to increase their efficiency, mainly by increasing the surface area of the capture material.
They are still experimental devices, producing very little energy at very high cost, and currently have no practical application. Their electromotive force comes from the electrons produced by beta-minus decay reactions, the electrical output being a direct function of the intensity of the radioactive decay and of the decay energy at work in these components. This output is structurally limited by the small size of the components, which prevents it from ever becoming significant.
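Since the output scales with the decay rate and the energy per decay, a rough order-of-magnitude sketch can be written down. The figures below (a 1 Ci tritium source, a mean beta energy of about 5.7 keV, and a 5% conversion efficiency) are illustrative assumptions, not values from the text:

```python
# Rough betavoltaic output estimate: P = activity * mean beta energy * efficiency.
CURIE = 3.7e10            # decays per second in one curie
EV = 1.602e-19            # joules per electronvolt

activity_bq = 1.0 * CURIE          # assumed: 1 Ci tritium source
mean_beta_energy_j = 5.7e3 * EV    # mean beta energy of tritium, ~5.7 keV
efficiency = 0.05                  # assumed conversion efficiency

power_w = activity_bq * mean_beta_energy_j * efficiency
print(f"Electrical output: {power_w * 1e6:.2f} microwatts")  # ≈ 1.69 microwatts
```

The microwatt-scale result illustrates why these components, at their small sizes, cannot deliver significant power.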
The idea, popularized a few years ago by certain magazines and websites, that these generators are mainly intended for portable devices such as mobile phones, laptops, or even pacemakers, remains premature and for the moment seems unrealistic. Such devices today use batteries with an endurance of a day to a week at most, which must therefore be recharged very frequently, limiting their portability. A betavoltaic cell was then presented by the promoters of this technology as a way to extend the period of use without recharging, which, according to them, could reach 30 years in domestic use. Several potential advantages are put forward to justify research in this direction.
The nuclear transformation is not exothermic, unlike the chemical reactions of conventional batteries.
A betavoltaic cell at the end of its life contains no harmful elements, unlike the heavy metals or non-recyclable contents of ordinary batteries.
The beta-minus particles used release a small amount of energy and are easily shielded, compared with the gamma rays generated by more radioactive devices, so these generators should emit neither dangerous radiation nor dangerous particles.
However, betavoltaic generators tend to suffer internal degradation from the beta radiation, which gradually decreases their output, while the energy source itself is depleted as the tritium decays.
The company BlackLight Power announced on Wednesday that it had produced several million watts of electricity in its laboratories thanks to a patent-pending technology it describes as revolutionary, called “Solid Fuel-Catalyst-Induced-Hydrino-Transition” (SF-CIHT).
A specific solid fuel containing water, confined between the two electrodes of an SF-CIHT cell and subjected to a current of 12,000 amperes, ignites the water, producing an extraordinary electric flash.
Fuel can thus be fed continuously to the electrodes to produce electricity continuously.
BlackLight produced a million watts of electricity in a volume of one ten-thousandth of a liter, an astonishing power density exceeding 10 billion watts per liter.
By way of comparison, one liter of BlackLight's electricity source could produce as much electricity as a power plant exceeding the output of the four old reactors of the Fukushima Daiichi nuclear plant, scene of one of the greatest nuclear disasters in history.
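The power-density figure as reported can be checked arithmetically (this verifies only the company's own arithmetic, not the claim itself):

```python
# Sanity check of the reported figures: one million watts produced
# in a ten-thousandth of a liter.
power_w = 1e6          # 1 MW, as claimed
volume_l = 1e-4        # one ten-thousandth of a liter

density_w_per_l = power_w / volume_l
print(f"{density_w_per_l:.0e} W per liter")  # 1e+10, i.e. 10 billion W/L
```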
Safe and non-polluting, the electricity production system catalytically converts the hydrogen present in the water-based solid fuel (H2O) into a non-polluting product, a lower-energy hydrogen species called the hydrino, by allowing electrons to fall to a lower orbit around the nucleus of the atom.
The energy released by the water-based fuel (H2O), freely available as moisture in the air, is claimed to be 100 times greater than that of an equivalent quantity of high-octane gasoline.
The electricity is produced in the form of a plasma, an ionized gaseous state of the fuel expanding at supersonic speed, consisting mainly of positive ions and free electrons, which can be converted directly into electricity by high-efficiency magnetohydrodynamic converters.
Using readily available components, BlackLight has designed a technical model of a closed electric generator (closed except for the addition of the water-based fuel) capable of generating 10 million watts of electricity, enough to supply ten thousand households.
The SF-CIHT electricity source would be a revolutionary innovation for all forms of transport: automotive, road, rail and maritime, aviation and aerospace, since its claimed power density is a million times higher than that of a Formula 1 engine and ten million times higher than that of a jet engine.
BlackLight's solid fuels and its CIHT electrochemical cell use the same catalyst as the new SF-CIHT cells, and served as a model for Dr. Mills when he invented the plasma-producing SF-CIHT cell.
The results obtained by BlackLight, several times above the theoretical maximum energy release for representative solid fuels, were replicated in the applications laboratory of Perkin Elmer, on the company's site, with its commercial instrument. Moreover, the CIHT electrochemical cell was also replicated independently, outside its premises.
Hydrinos are a previously unknown form of hydrogen, in lower energy states, produced by the BlackLight Process during the release of latent energy contained in hydrogen atoms.
The energy released by the formation of a hydrino is more than 200 times greater than that needed to extract hydrogen from water by electrolysis. CIHT cells appear to extract this energy directly as electricity. By diverting a fraction of a percent of the electrical output, the hydrogen fuel can be produced from water on site with a net production of energy. This energy could therefore be produced anywhere, including in homes, businesses and cars, without any supporting infrastructure for fuel generation or distribution. BlackLight's thermal energy sources are likewise independent of any fuel infrastructure and offer additional advantages over conventional centralized electricity production, both in distributed-power applications and in thermal-retrofit operations.
CIHT technology has a low projected cost per unit of power compared with thermal systems, and it produces electricity without requiring enormous mechanical generators driven by a thermal energy source. Faster deployment is also possible, by rolling out a large number of autonomously distributed units, which circumvents the enormous barriers to entry on the energy markets, such as developing and building billion-dollar power plants together with their associated energy distribution infrastructure.
The energy source can be transported and connected to your electrical panel to power your house fully and even produce a surplus of electricity to supply your neighborhood as well, Dr. Mills added. Akridge Energy, a license holder, hopes on the one hand to deploy CIHT-based power units in commercial real estate under a decentralized delivery model, and on the other hand to sell electricity to its tenants and, later, to local electric utilities.
Luminous signature of the hydrino
BLP also announced the reproduction of an exceptionally high-energy light emission, below 80 nanometers, from hydrogen at the Harvard-Smithsonian Center for Astrophysics. These results, previously considered impossible under existing theory, would be explained by the formation of hydrinos. The direct spectral observation of the transitions of hydrogen into hydrinos, as well as their claimed widespread astrophysical presence as a candidate identity for the dark matter of the universe, led to the publication of an article entitled “Hydrino Continuum Transitions with Cutoffs at 22.8 nm and 10.1 nm” (Int. J. Hydrogen Energy) by Dr. Randell Mills and Dr. Ying Lu.
Radioisotope thermoelectric generator
A radioisotope thermoelectric generator (RTG) is a nuclear electric generator of simple design, producing electricity from the heat released by the decay of a radioactive material rich in one or more radioisotopes, generally plutonium-238 in the form of plutonium dioxide, 238PuO2. Today the heat is converted into electricity via the Seebeck effect through thermocouples: the generators produced late last century used silicon-germanium materials, while those currently produced instead implement PbTe/TAGS junctions, with an energy efficiency never reaching 10%. To improve these performances, current research is directed toward thermionic converters and radioisotope Stirling generators, likely to multiply the overall efficiency by four.
Such generators are used in astronautics to power space probes, and more generally to supply electricity to equipment requiring a stable, reliable energy source able to operate continuously for several years without direct maintenance, for example in military, underwater or otherwise inaccessible settings. Miniature 238Pu generators were thus designed for cardiac pacemakers, now replaced by greener technologies based on lithium-ion batteries, and generators of simpler design running on strontium-90 were used in the past to light certain isolated lighthouses on the coasts of the USSR.
Photograph of the radioisotope generator of the Cassini probe.
Diagram of the GPHS-RTG used on the Ulysses, Galileo, Cassini-Huygens and New Horizons probes.
Reddish glow of a 238PuO2 pellet under the effect of its characteristic radioactive decay.
Heat source
Compared with other nuclear equipment, the operating principle of a radioisotope generator is simple. It consists of a heat source, an armored container filled with radioactive material, into which holes are bored to hold thermocouples, the other end of the thermocouples being connected to a radiator. The thermal energy passing through the thermocouples is converted into electrical energy. A thermoelectric module is a device made of two kinds of conducting metal connected in a closed loop: if the two junctions are at different temperatures, an electric current is generated in the loop.
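The junction voltage follows the Seebeck relation V = S × ΔT. A minimal sketch, with an assumed Seebeck coefficient and assumed junction temperatures (illustrative values, not from the text):

```python
# Seebeck-effect sketch: each thermocouple develops a voltage proportional
# to the temperature difference between its junctions, V = S * dT.
seebeck_uV_per_K = 200.0     # assumed Seebeck coefficient in microvolts/K
hot_junction_K = 800.0       # assumed: side facing the radioisotope heat source
cold_junction_K = 400.0      # assumed: side connected to the radiator

dT = hot_junction_K - cold_junction_K
voltage_per_couple_V = seebeck_uV_per_K * 1e-6 * dT   # 0.08 V per couple
n_couples = 500              # assumed number of couples wired in series
print(f"Open-circuit voltage: {voltage_per_couple_V * n_couples:.1f} V")  # 40.0 V
```

Because a single junction yields only tens of millivolts, real RTGs stack hundreds of couples in series, which is why the thermocouples line the bored holes around the fuel container.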
The chosen radioisotope must have a rather short half-life in order to provide sufficient power; half-lives of a few decades are preferred. It is generally plutonium-238, in the form of plutonium dioxide 238PuO2, a powerful alpha emitter with a radioactive half-life of 87.74 years. This isotope is used because, in addition to its particularly well-suited half-life, it emits all its radiation as alpha particles, which are converted into heat far more effectively than beta-minus particles, and a fortiori than gamma rays.
The first radioisotope used was polonium-210, because of its much shorter half-life of only 138.38 days and hence its very high radiated power, while americium-241 offers a less powerful but five-times-longer-lived alternative thanks to its 432.2-year half-life:
Calculated power decay of three radioisotope fuels for RTGs (all values in W/kg of fuel)

| Elapsed time | 241Am-based fuel (pure 241Am: 106 W/kg) | PuO2 at 75% 238Pu (pure 238Pu: 567 W/kg) | Po at 95% 210Po (pure 210Po: 140,000 W/kg) |
|---|---|---|---|
| Initial | 97.0 | 390.0 | 133,000 |
| After 1 month | 97.0 | 389.7 | 114,190 |
| After 2 months | 97.0 | 389.5 | 98,050 |
| After 6 months | 96.9 | 388.5 | 53,280 |
| After 1 year | 96.8 | 386.9 | 21,340 |
| After 2 years | 96.7 | 383.9 | 3,430 |
| After 5 years | 96.2 | 374.9 | 14 |
| After 10 years | 95.5 | 360.4 | ≈0 |
| After 20 years | 93.2 | 333.0 | ≈0 |
| After 50 years | 89.5 | 262.7 | ≈0 |
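The tabulated decline follows the exponential decay law P(t) = P0 · 2^(−t/T½). A minimal check in Python, using the half-lives given in the text (87.74 years for 238Pu, 138.38 days for 210Po) and the initial fuel powers from the table:

```python
# Exponential decay of thermal power: P(t) = P0 * 2**(-t / half_life).
def power(p0_w_per_kg, half_life_years, t_years):
    """Remaining specific power (W/kg) after t_years of decay."""
    return p0_w_per_kg * 2 ** (-t_years / half_life_years)

# 238Pu fuel: ~390 W/kg initially, half-life 87.74 years
print(round(power(390.0, 87.74, 50), 1))            # → 262.7, as tabulated

# 210Po fuel: ~133,000 W/kg initially, half-life 138.38 days
print(round(power(133_000.0, 138.38 / 365.25, 1)))  # close to the tabulated 21,340
```

The contrast between the two columns shows the design trade-off: polonium delivers enormous power but is nearly exhausted after five years, while plutonium still retains two thirds of its power after half a century.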
The isotopes 242Cm and 244Cm have also been proposed, in the form Cm2O3, because of their particular properties:
Curium-242 yields plutonium-238 by alpha decay with a radioactive half-life of 162.79 days, which potentially makes it a double-release radioisotope, since its decay product can in turn be used as an energy source for a radioisotope generator.
Curium-244 yields plutonium-240 by alpha decay with an 18.10-year half-life.
With a thermal power rating of 98 kW/kg for 242Cm2O3 (that of 244Cm2O3 being far lower, its half-life being some forty times longer), these ceramics nevertheless have the drawback of emitting a substantial neutron flux, owing to spontaneous-fission rates of 6.2×10⁻⁶ and 1.4×10⁻⁶ per alpha decay respectively, which requires shielding several tens of times heavier than with 238PuO2.
Conversion into electricity
The thermoelectric elements currently used to convert into electricity the temperature difference generated by radioisotope decay are notably inefficient: between 3 and 7% only, never reaching 10%. In astronautics, these thermocouples were long made of silicon-germanium materials, notably in the GPHS-RTGs of the Ulysses, Galileo, Cassini-Huygens and New Horizons probes. The new generation, introduced with the MMRTG for the Mars Science Laboratory mission, operates with a junction known as PbTe/TAGS, that is, lead telluride PbTe with tellurides of antimony (Sb2Te3), germanium (GeTe) and silver (Ag2Te).
More innovative technologies based on thermionic converters could reach an energy efficiency of 10 to 20%, while experiments with thermophotovoltaic cells, placed outside a conventional radioisotope generator fitted with thermoelectric elements, could in theory approach 30% efficiency.
Radioisotope Stirling generators, which use a Stirling engine to generate the electric current, could reach an efficiency of 23%, or even more by amplifying the temperature gradient. The main disadvantage of this device, however, is that it has moving mechanical parts, whose wear and vibration must be managed.
A radioisotope thermoelectric generator is particularly well suited to producing a stable power supply over a long lifetime, keeping the instruments aboard interplanetary probes operational for many years. Thus the generator aboard the New Horizons probe can provide a stable supply of 200 W for more than 50 years; after two centuries, the power falls to 100 W. However, because of the plutonium-238 present in a space RTG, any failure at launch of the rocket propelling the probe presents an environmental risk.
Isotope generators were designed mainly for space exploration, but the Soviet Union also used strontium-90 generators to power isolated lighthouses. This isotope is considerably cheaper than other traditional radioisotopes, but emits almost exclusively beta radiation, which gives rise to strong X-ray emission by bremsstrahlung. This did not pose a major problem given that these installations were intended for isolated, hard-to-reach places, where they provided a highly reliable energy source, but they nevertheless presented potential risks in the event of an incident or of degradation of the materials in the absence of close monitoring. Thousands of generators of this type were built, using strontium fluoride SrF2 or even strontium titanate SrTiO3; none of them is today able to operate at an acceptable power, their radioisotope having been depleted.
Strontium-90 has a 28.8-year radioactive half-life (meaning that half of the 90Sr remains after 28.8 years, a quarter after 57.6 years, and so on). It decays by beta-minus emission into yttrium-90, which in turn decays by beta-minus emission with a 64-hour half-life, finally yielding zirconium-90, which is stable.
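The successive halvings described above can be computed directly:

```python
# Remaining fraction of 90Sr after n half-lives of 28.8 years each.
half_life_years = 28.8
for n in (1, 2, 3):
    remaining = 0.5 ** n
    print(f"after {n * half_life_years:.1f} years: {remaining:.3f} of the initial 90Sr")
# half after 28.8 years, a quarter after 57.6 years, an eighth after 86.4 years
```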
Isotope generators do not work like nuclear power plants.
Nuclear power plants create energy through a chain reaction in which the fission of one atom releases neutrons, which in turn cause the fission of other atoms. If uncontrolled, this reaction can grow exponentially and cause serious accidents, notably a reactor meltdown.
Inside an isotope generator, only the natural radiation of the radioactive material is used, with no chain reaction, which a priori rules out any catastrophic scenario. The fuel is in fact consumed slowly, producing less energy but over a long period.
Even if the risk of a major catastrophe is almost nil, radioactive and chemical contamination cannot be ruled out, because all the isotopes of plutonium and the other transuranics are also chemically toxic. If the launch of a space probe fails at low altitude, there is a risk of localized contamination; likewise, in the upper atmosphere, disintegration of the probe could disperse radioactive particles. Several accidents of this type have occurred, including three (the American satellite Transit 5BN-3 and two Russian probes, among them the Cosmos 305 mission) that led to the release of radioactive particles into the atmosphere. In the other cases, no contamination could be detected, and some isotope generators were recovered intact, having survived re-entry into the atmosphere.
The first electric generator powered by a virus
Scientists at Lawrence Berkeley National Laboratory in the United States have developed a method of producing electricity using harmless viruses able to convert mechanical energy into electricity.
The researchers built the technology around the bacteriophage M13, a virus that attacks only bacteria and is therefore harmless to people. The project could, in the near future, enable gadgets such as smartphones to be charged from our own movements, such as walking.
The generator can produce enough power to light a liquid-crystal display. It operates under the pressure of a small electrode: the switch is covered with a thin layer of engineered virus capable of producing an electric charge.
It is a first step toward the development of personal generators for use in nano-devices and other electronic devices based on viral mechanisms.
Through genetic engineering, the proteins were then given four amino-acid groups at their negative end, the objective being to increase the charge difference between the two poles of each helical polypeptide chain. The researchers went further and managed to stack up to twenty layers of virus on top of one another, thereby boosting the generator's power. Gold-coated electrodes placed in contact with the bacteriophages carry off the generated current. A simple press is then enough to produce electricity.
The scientists tested their approach by creating a thin, paper-like generator producing enough current to drive a small liquid-crystal display.
A film of virus, observed under a microscope, was subjected to an electric current. The roughly 2,700 surface proteins covering each biological entity then changed shape, meeting the researchers' expectations: a piezoelectric material indeed deforms when a current passes through it.
The M13 bacteriophage is 880 nanometers long and 6.6 nanometers in diameter. It is covered with 2,700 charged coat proteins, whose deformation by a mechanical process generates current. Given its dimensions, this material of biological origin could be used to design nanogenerators.
A one-square-centimeter generator produces a current at a voltage of 400 mV, about a quarter of an AAA battery, with an intensity of 6 nanoamperes.
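From the voltage and current figures given, the device's output power follows directly (P = V × I):

```python
# Output power of the virus-based generator from the reported figures.
voltage_v = 0.400      # 400 mV across the device
current_a = 6e-9       # 6 nanoamperes

power_w = voltage_v * current_a
print(f"{power_w * 1e9:.1f} nW per square centimeter")  # 2.4 nW
```

A few nanowatts per square centimeter puts into perspective how far such nanogenerators are from charging a phone, even if the principle is demonstrated.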
With silicone, the body can produce electricity
Walk to recharge your cell phone, breathe to power your pacemaker: flexible and piezoelectric, the invention converts mechanical energy, that is, motion, into electricity.
Researchers at Princeton University have succeeded in creating a strip that combines the flexibility of silicone with the powerful piezoelectric effect of lead zirconate titanate, or simply PZT.
This PZT is a ceramic that converts 80% of the mechanical energy it receives when deformed into electrical energy, an outstanding performance for a piezoelectric material. Since the body releases little energy during its movements, a high conversion rate matters. "PZT is 100 times more efficient than quartz, another piezoelectric material," said Michael McAlpine, a mechanical engineer at Princeton.
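The 80% conversion figure can be made concrete with an assumed amount of strain energy per deformation (the 1 mJ input below is a hypothetical value for illustration, not from the text):

```python
# Energy recovered by PZT at the reported 80% conversion rate.
mechanical_energy_j = 1.0e-3    # assumed: 1 mJ of strain energy per deformation
pzt_efficiency = 0.80           # conversion rate reported for PZT

electrical_energy_j = mechanical_energy_j * pzt_efficiency
print(f"{electrical_energy_j * 1e3:.2f} mJ recovered per deformation")  # 0.80 mJ
```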
Silicone also has the advantage of being flexible.
After producing the PZT ceramic at high temperature, the researchers chemically extracted nanoribbons from it by micro-etching, then embedded them in silicone sheets.
The result is a "piezoelectric silicone circuit," as its designers call it: a flexible electrical nanogenerator.
Among the other benefits of their creation: it is biocompatible, adaptable in size, and producible using microelectronics printing techniques.
Its biocompatibility, that is, its harmlessness to the body and the absence of rejection, would for example allow this generator to be implanted near the lungs to power a pacemaker. The simple movement of the chest during breathing may indeed be enough to generate the necessary electric current.
According to its designers, the excellent performance of the piezoelectric ribbon assembly, coupled with the flexibility and biocompatibility of silicone, could open a wide avenue in both basic and applied research.
With the proliferation of smart textiles, printable solar cells and printable batteries, this material indeed promises interesting combinations.
The road of the future will generate electricity
Innowattech, an Israeli start-up, will soon test its piezoelectric technology on 100 meters of asphalt.
The technology, named IPEG for Piezo Electric Generator, uses thousands of piezoelectric crystals embedded in the road to recover a certain amount of energy. Not only can these crystals harvest the mechanical energy produced by the weight and motion of vehicles, they can also recover energy from vibrations and temperature changes.
According to a scientific calculation, 500 trucks passing over one kilometer of road at an average speed of 72 kilometers per hour can produce 200 kWh per hour, enough electricity to cover the average consumption of 300 families.
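An order-of-magnitude check of that claim: 200 kWh per hour is a continuous 200 kW, which spread over 300 households gives a plausible average draw.

```python
# 200 kWh produced per hour is equivalent to 200 kW of continuous power.
generated_kw = 200.0
households = 300

per_household_kw = generated_kw / households
print(f"{per_household_kw * 1000:.0f} W per household on average")  # 667 W
```

Roughly 670 W per household is in the range of typical average domestic consumption, so the figures quoted are at least internally consistent.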
It is estimated today that the methane hydrates of the ocean floor contain, in carbon equivalent, twice the total of all known coal, oil and natural gas deposits worldwide. Along the south-eastern coast of the USA alone, a zone of 26,000 square kilometers contains 35 Gt (gigatonnes, i.e. billions of tons) of carbon, that is, 105 times the natural gas consumption of the USA in 1996! The map below, taken from Suess, Bohrmann, Greinert and Lausch, shows the distribution of the known methane hydrate deposits in the world.
The yellow dots indicate deposits on continental shelves or slopes; the red diamonds, deposits in permafrost (permanently frozen ground).
Under particular conditions of temperature and pressure, ice (H2O) can trap gas molecules, forming a kind of cage imprisoning them. The resulting compounds are called gas hydrates, or clathrates. The trapped gases vary, and include carbon dioxide (CO2), hydrogen sulfide (H2S) and methane (CH4).
These crystalline cages can store a very large quantity of gas. The case of interest here is methane hydrate, an ice containing an enormous amount of gas: melting 1 cubic centimeter of this ice releases up to 164 cubic centimeters of methane!
Origin and stability of the methane hydrates
A large quantity of the organic matter that settles on the ocean floor is incorporated into the sediments. Under the action of anaerobic bacteria, this organic matter is transformed into methane within the first few hundred meters of the sedimentary pile.
A very large volume of methane is thus produced. Part of this methane combines with water molecules to form methane hydrate, within a well-defined window of temperature and pressure. More complete diagrams can be found at the links given at the end of this page.
In the gray zone, water and methane combine to form a hydrate in the ice state, whereas outside this zone the two compounds are separate, each in its own state, liquid and gas. In other words, methane hydrate is stable under the temperature and pressure conditions of the gray zone, and unstable outside it. For example, a methane hydrate lying in oceanic sediments at a depth of 600 meters at 7°C is stable; it becomes unstable with a temperature increase of less than 1°C. Becoming unstable means the ice melts and releases its methane, at a rate of 164 cubic centimeters of gas per cubic centimeter of ice.
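The expansion ratio above makes it easy to see why destabilization is dangerous: a modest volume of melted hydrate frees a very large volume of gas. A minimal sketch (the one-cubic-meter input is an assumed illustrative volume):

```python
# Gas released when methane hydrate destabilizes: each cubic centimeter
# of hydrate ice can free up to 164 cm3 of methane, so the ratio holds
# in any consistent unit of volume.
EXPANSION_RATIO = 164.0     # volumes of CH4 per volume of hydrate

hydrate_volume_m3 = 1.0     # assumed: one cubic meter of destabilized hydrate
methane_m3 = hydrate_volume_m3 * EXPANSION_RATIO
print(f"up to {methane_m3:.0f} m3 of methane released")  # 164 m3
```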
Methane hydrates are found in the ocean, mainly at the margins of the continental shelves and on the continental slopes, but also at shallower depth in very cold regions, such as the Arctic.
The margins of the continental shelves and the slopes are a privileged zone for accumulating methane hydrates, because that is where the greatest quantity of oceanic organic matter settles.
Methane hydrates are also found in permafrost, the layer of ground that remains frozen permanently, even during surface thaw periods. The large volume of terrestrial organic matter accumulated in these soils is transformed into biogenic methane which, in contact with water, is trapped in hydrates. The pressures there are low, but the temperatures are very cold, well below 0°C.
As conventional hydrocarbon reserves run out, we will have to fall back on so-called non-conventional reserves, such as deposits in remote and expensive-to-exploit regions, tar sands and, perhaps one day, methane hydrates. As mentioned above, the methane hydrates of the ocean floor constitute an enormous energy reserve, but one inaccessible for the moment. This methane ice occurs either in the pores of the sediment, between the particles of sand or clay, cementing them; or as blisters in the sediments; or in layers several millimeters or centimeters thick parallel to the beds, or in veins cutting across them.
Methane hydrates are thus dispersed through the sediments and cannot be exploited by conventional drilling; one would instead have to envisage massive excavation of the sediment using dredgers, as is done for example to clear navigation channels of sand and mud, or a sophisticated system for pumping the sediment. But here lies an enormous risk of rapidly destabilizing the hydrates and releasing considerable quantities of methane into the atmosphere, not to mention the accidents likely to accompany this kind of exploitation. The fact remains that the oil industry salivates at the thought of perhaps one day having access to such reserves.
A massive destabilization of the methane hydrates, caused for example by a 1 or 2°C increase in ocean temperature, which is entirely compatible with current climate models, risks producing a catastrophic increase in atmospheric greenhouse gases. Such a destabilization could also trigger immense underwater landslides on the continental slopes, generating major tsunamis that would strike coastal populations. These could be two of the catastrophic effects of the current climate warming caused by the rise in atmospheric greenhouse gases. Methane is 21 times more effective than CO2 as a greenhouse gas!
Waste water, a future energy source
An inexhaustible energy: salt water, fresh water, organic matter and carefully selected bacteria make up a device able to produce hydrogen gas. Still experimental and expensive for the moment, the process could be improved.
This system can produce hydrogen anywhere there is dirty water near sea water: thus Bruce Logan, of Pennsylvania State University, summarizes the possibilities of his MREC, a microbial reverse-electrodialysis electrolysis cell. The idea is to carry out electrolysis to produce hydrogen gas, a good energy carrier, using a double source of free electricity: reverse electrodialysis and bacteria.
There are indeed bacteria that release electrons as they feed. Bruce Logan has been interested in using waste water since 2004, and in 2005 he described a biological version of the fuel cell, in which organic matter is degraded in a compartment populated by bacteria, producing a little electricity. But the output of such a microbial fuel cell (MFC) is too weak for electrolysis. The idea of producing hydrogen by this means exists, but a complementary, hence external, source of electricity is needed. Previous work shows that such systems can generate 0.3 volt per cell, whereas 0.414 volt is needed for electrolysis.
As for electrodialysis, it is a known process for extracting ions from a liquid using an electric current, with a set of membranes that let through only positive ions in some cases and only negative ions in others. Not very efficient, the method is seldom valued. The reverse reaction is possible: with two sources of water of different salinities crossing alternately positive and negative membranes, electricity can be produced. Fresh water and sea water can thus become an electricity generator. But there again, the result is poor and insufficient for electrolysis. Twenty-five pairs of membranes would be needed to reach the required voltage of 1.8 volts, the researchers explain, which would ruin the profitability of the apparatus because of the energy consumed in pumping the water.
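The voltage budget described above can be laid out explicitly, using only the figures from the text:

```python
# Why neither source suffices alone, from the figures in the text.
microbial_v = 0.3        # volts from a microbial fuel cell
electrolysis_v = 0.414   # volts required for water electrolysis

shortfall_v = electrolysis_v - microbial_v
print(f"Shortfall the electrodialysis stage must cover: {shortfall_v:.3f} V")  # 0.114 V

# Reverse electrodialysis alone: ~25 membrane pairs to reach 1.8 V,
# i.e. a modest voltage per pair.
pairs = 25
per_pair_v = 1.8 / pairs
print(f"~{per_pair_v * 1000:.0f} mV per membrane pair")  # 72 mV
```

The combination works because the bacteria supply most of the voltage, so only a few membrane pairs are needed to cover the 0.114 V gap, avoiding the pumping losses of a 25-pair stack.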
Professor Logan's idea was to combine the two methods: bacteria fed on organic matter produce an electric current, which is supplemented by the current from reverse electrodialysis. One need only add fresh water, salt water and a few membranes.
In the description of the device published in PNAS, the team reports modest but real results. Their apparatus produces 0.8 to 1.6 cubic meters of hydrogen gas per day per cubic meter of water used. Efficiency lies between 58 and 64% for the systems tested. They use platinum electrodes, which considerably burdens the cost of the process, but the American researchers are convinced that a cheaper material could do: they cite molybdenum sulfide, which would lower the efficiency only to about 50%.
The process, moreover, remains complex and is far from mature, but it seems clever and even promising. Producing hydrogen locally from waste water or organic waste to feed fuel cells could, after a phase of research and development, become a promising long-term option.
Producing electrical energy through photosynthesis
Researchers at the CNRS have managed to develop a biofuel cell that runs on the photosynthesis of plants. This device, which converts solar and chemical energy into electrical energy, could have medical applications.
The researchers at the CNRS Paul Pascal Research Center in Bordeaux developed a biofuel cell that runs on glucose and dioxygen (O2), two products of photosynthesis, the process by which plants convert solar energy into chemical energy. During this process, in the presence of visible light, carbon dioxide (CO2) and water (H2O) are transformed into glucose and dioxygen (O2) through a complex series of chemical reactions.
Diagram of the biofuel cell.
This biofuel cell consists of two electrodes modified with enzymes: glucose oxidase (GOx) at the anode and bilirubin oxidase (BOD) at the cathode. Technically, at the anode, electrons are transferred from glucose to glucose oxidase (GOx), from GOx to polymer I, then from polymer I to the electrode. At the cathode, electrons are transferred to polymer II, then from polymer II to bilirubin oxidase (BOD), and finally from BOD to O2.
Converting solar energy into electrical energy. After implanting this cell in a cactus, the CNRS researchers were able to follow the evolution of photosynthesis in vivo and in real time. They observed an increase in the electric current when a lamp was switched on and a decrease when it was switched off. The researchers thus showed that a biofuel cell implanted in a cactus could generate a power of 9 µW (microwatts) per cm2. Since the output is proportional to the intensity of the lighting, more intense illumination accelerates the production of glucose and O2 (photosynthesis), so there is more fuel to run the biofuel cell, explains the CNRS.
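The reported power density is easy to scale. The electrode areas below are hypothetical round numbers chosen for illustration, not dimensions from the CNRS work:

```python
# Scaling the reported 9 microwatts per cm2 of electrode area.
# (Areas below are illustrative assumptions, not from the CNRS study.)
POWER_DENSITY_W_PER_CM2 = 9e-6

def biopile_power(area_cm2):
    """Power in watts for a given electrode area in cm2."""
    return POWER_DENSITY_W_PER_CM2 * area_cm2

print(biopile_power(1))       # a 1 cm2 electrode: 9 microwatts
print(biopile_power(10_000))  # a full square metre: 0.09 W
```

Even a square metre of electrode would yield well under a watt, which is why the intended applications are low-power implanted sensors rather than bulk generation.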
A device with a medical vocation. Although in the very long term such a device could help produce electrical energy in an ecological and renewable way, the objective of this work is above all medical. In that case, the biofuel cell would operate autonomously under the skin (in vivo) by drawing chemical energy from the oxygen-glucose couple naturally present in physiological fluids, explains the CNRS. It could thus power implanted medical devices such as, for example, autonomous subcutaneous sensors measuring glucose levels in diabetic patients.
Producing electricity at night with solar power
Solar panels can now produce electricity at night, without sunlight. A new solar panel technology produces hydrogen gas at the same time as electricity during the day, and at night this hydrogen feeds fuel cells.
Dr. Mikhail Zamkov of Bowling Green State University in Ohio and his team used two synthetic inorganic nanocrystals to manufacture solar panels that are more durable and able to produce hydrogen gas. The rod-shaped nanocrystal produces hydrogen by photocatalysis. The synthetic nanocrystals produce electricity from sunlight, and since they are inorganic they are easier to recharge and less sensitive to heat than their organic counterparts.
The first nanocrystal is rod-shaped, which allows the charge separation needed to produce hydrogen gas, a reaction known as photocatalysis. The second nanocrystal is composed of stacked layers and generates electricity because it is photovoltaic. The main advantage of this technique is that it couples light absorption with photocatalysis.
The two new types of nanocrystals could replace the traditional organic molecules used to build solar panels. At present, the technology is at the research stage. The researchers believe it could be mass-produced and could be an excellent alternative to current methods of collecting solar energy.
Producing electricity from living plants
According to statistics, wetlands cover approximately 6% of the Earth's surface. A new plant microbial fuel cell technology developed at Wageningen University and Research in the Netherlands could transform these zones into a viable source of renewable energy. Moreover, the developers estimate that their technology could be used to supply electricity to remote communities, and that green roofs could power households.
The microbial fuel cells produce electricity while the plants continue to grow.
The system works by using the roughly 70% of the organic matter produced by photosynthesis that is not used by the plant and is excreted by its roots. As naturally occurring bacteria around the roots absorb these organic residues, electrons are released as a waste product. By placing an electrode near the bacteria to collect these electrons, the Wageningen UR research team, led by Marjolein Helder, succeeded in producing electricity.
Although the microbial fuel cells currently generate only 0.4 W/m² from plant growth, the researchers say that future systems could generate up to 3.2 W/m², which would allow a 100 m² roof to supply electricity to a house with an average consumption of 2,800 kWh per year.
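The roof figure can be checked directly: at the projected power density, running continuously over a year, the arithmetic does match the quoted household consumption.

```python
# Checking the article's claim: 3.2 W/m2 over a 100 m2 roof, running
# continuously for a year, vs a 2,800 kWh annual household consumption.
power_density = 3.2        # W/m2, projected future systems
roof_area = 100            # m2
hours_per_year = 24 * 365  # 8,760 h

energy_kwh = power_density * roof_area * hours_per_year / 1000
print(f"{energy_kwh:.0f} kWh per year")  # ~2,803 kWh, matching the claim
```

Note the implicit assumption in the claim itself: the cells would have to produce at that density around the clock, all year.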
The researchers believe these green energy-producing roofs could become a reality in the coming years, with large-scale electricity production in marshes around the world from 2015. The technology works with various types of plants, including grasses.
The V3 solar spin cone
The V3 Spin Cell is the first in a series of solar technologies based on our patent pending technology. Approximately a meter high and a meter wide and producing 1kWp of electricity, the Sentinel can be used in a variety of configurations to suit solar farm deployments, industrial roof mounts or home installations.
For areas requiring greater power density, the Power Pole is the answer. This is a pole that holds 10 Spin Cells, or 10 kWp, in a footprint of 13 m². The spin cells are placed with mathematical precision to make sure there is minimal shading. This not only creates significantly greater power density, but also removes the concern of floods, while mitigating the environmental impact.
Light is transferred to electricity in nanoseconds. Heat is transferred in milliseconds (1,000,000X longer). The PV on the inner cone of the Spin Cell captures the light, generates the electrical energy and then spins away before the heat can be transferred to the PV.
The V3solar patent pending design also delivers a higher level of efficiency from PV. Tests to date have shown improvements under laboratory conditions of around 20%, which effectively lifts the efficiency of the PV from 20% to 24%.
A photovoltaic cell is an electronic component which, when exposed to light (photons), produces electricity. The photovoltaic effect is at the origin of the phenomenon. The voltage obtained depends on the incident light. A photovoltaic cell delivers a direct voltage.
The most widespread photovoltaic cells are made of semiconductors, mainly silicon (Si) and more rarely other semiconductors: copper indium selenide (CuInSe2 or CuInGaSe2), cadmium telluride (CdTe), etc. They generally take the form of thin plates about ten centimetres on a side, sandwiched between two metal contacts, with a thickness of around a millimetre.
The cells are often assembled into photovoltaic solar modules or solar panels, depending on the required power.
The principle of the photoelectric effect (direct transformation of the energy carried by light into electricity) was applied as early as 1839 by Antoine Becquerel, who noticed that a chain of conducting elements produced a spontaneous electric current when illuminated. Later, selenium and then silicon (which for cost reasons finally supplanted cadmium telluride and cadmium-indium-selenide, also tested) proved suitable for the first photovoltaic cells (exposure meters for photography from 1914, then, forty years later, in 1954, for electricity production). Research today also focuses on polymers and organic materials (possibly flexible) that could replace silicon.
Principle of operation
In a semiconductor exposed to light, a photon of sufficient energy knocks an electron loose, creating a hole in the process. Normally, the electron quickly finds a hole and recombines, and the energy brought by the photon is dissipated. The principle of a photovoltaic cell is to force the electrons and holes to move toward opposite faces of the material instead of simply recombining within it: a potential difference, and thus a voltage, then appears between the two faces, as in a battery.
To achieve this, a permanent electric field is created by means of a P-N junction between two layers doped P and N respectively:
Structure of a photovoltaic cell
The upper layer of the cell is made of N-doped silicon. In this layer there is a greater number of free electrons than in a layer of pure silicon, hence the name N-doping, as in negative (the charge of the electron). The material remains electrically neutral: it is the crystal lattice that carries an overall positive charge.
The lower layer of the cell is made of P-doped silicon. This layer therefore has on average fewer free electrons than a layer of pure silicon; the electrons are bound to the crystal lattice, which is consequently positively charged. Electrical conduction is ensured by holes, which are positive (hence P).
When the P-N junction is created, the free electrons of the N region enter the P layer and recombine with the holes of the P region. For the whole life of the junction, there will thus be a positive charge in the N region at the edge of the junction (because the electrons have left it) and a negative charge in the P region at the edge of the junction (because the holes have disappeared from it); together they form the space charge region (SCR), and an electric field exists between the two, directed from N to P. This field makes the SCR a diode, which allows current to pass in only one direction: electrons can pass from the P region to the N region, but not the other way round; conversely, holes pass only from N to P.
In operation, when a photon knocks an electron out of the lattice, creating a free electron and a hole, the two are driven apart by this electric field: the electrons accumulate in the N region (which becomes the negative pole), while the holes accumulate in the P-doped layer (which becomes the positive pole). This phenomenon is most effective in the SCR, where there are practically no charge carriers (electrons or holes) left, since they have been eliminated, or in its immediate vicinity: when a photon creates an electron-hole pair there, the electron and the hole separate and have little chance of meeting their opposite, whereas if the pair is created farther from the junction, the new electron (or hole) retains a high chance of recombining before reaching the N region (or the P region). But the SCR is necessarily very thin, so there is no point in making the cell very thick.
All in all, a photovoltaic cell is equivalent to a current generator with a diode connected in parallel.
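That equivalent circuit (current source in parallel with a diode) can be sketched numerically. The parameter values below are illustrative assumptions for a silicon cell at room temperature, not figures from the article:

```python
import math

# Minimal single-diode sketch of a photovoltaic cell: a photocurrent
# source in parallel with a diode (Shockley diode law). All parameter
# values are illustrative assumptions.
I_PH = 5.0      # photogenerated current, A (depends on illumination)
I_0 = 1e-10     # diode saturation current, A
N = 1.0         # diode ideality factor
V_T = 0.02585   # thermal voltage kT/q at ~300 K, V

def cell_current(v):
    """Output current at terminal voltage v."""
    return I_PH - I_0 * (math.exp(v / (N * V_T)) - 1.0)

isc = cell_current(0.0)                     # short circuit: all photocurrent
voc = N * V_T * math.log(I_PH / I_0 + 1.0)  # open circuit: diode eats it all
print(f"Isc = {isc:.2f} A, Voc = {voc:.3f} V")
```

The model reproduces the familiar behaviour: the short-circuit current equals the photocurrent, and the open-circuit voltage sits around 0.6 V for silicon-like parameters.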
Electrical contacts must be added (which let light through on the illuminated face: in practice, a grid contact is used), an anti-reflective layer to ensure good absorption of the photons, etc.
For the cell to work and produce maximum current, the semiconductor's band gap is matched to the energy of the photons. Junctions can also be stacked to exploit the photon energy spectrum as fully as possible, which gives multi-junction cells.
Manufacturing technique
Silicon is currently the material most used to manufacture photovoltaic cells. It is obtained by reduction from silica, the most abundant compound in the Earth's crust, found in particular in sand and quartz. The first stage is the production of so-called metallurgical silicon, only 98% pure, obtained from pieces of quartz taken from pebbles or a vein deposit (the industrial production technique does not allow starting from sand). Photovoltaic-grade silicon must be purified to more than 99.999%, which is achieved by converting the silicon into a chemical compound that is distilled and then converted back into silicon. Silicon is produced in the form of bars called ingots, of round or square cross-section. These ingots are then sawn into thin plates, squared off if necessary, 200 micrometres thick, called wafers. After treatment to enrich them with doping elements (P, As, Sb or B) and thus obtain P-type or N-type semiconductor silicon, the wafers are metallized: metal ribbons are embedded in the surface and connected to electrical contacts. Once metallized, the wafers have become photovoltaic cells.
The production of photovoltaic cells requires energy, and it is estimated that a photovoltaic module must operate for approximately two to three years, depending on its manufacturing technique, to produce the energy that was needed to manufacture it (the module's energy payback).
The manufacturing techniques and characteristics of the main types of cells are described in the next three paragraphs. Other types of cells are currently being studied, but their use is practically negligible.
The materials and manufacturing processes are the subject of ambitious research programs to reduce the costs of ownership and recycling of photovoltaic cells. Thin-film techniques on standardized substrates seem to be winning over the emerging industry. In 2006 and 2007, growth in worldwide solar panel production was slowed by a silicon shortage, and cell prices did not fall as much as hoped. Industry is therefore seeking to reduce the quantity of silicon used. Single-crystal cells have gone from 300 microns thick to 200, and the industry now expects to quickly reach 180 then 150 microns, reducing the quantity of silicon and energy required, but also the prices.
Amorphous silicon cell
During its transformation, the silicon produces a gas that is deposited onto a glass sheet. The cell is a very dark gray. It is the cell used in so-called solar calculators and watches.
operates under weak or diffuse illumination (even in overcast weather, including under illumination of 20 to 3,000 lux),
somewhat less expensive than the other techniques,
can be integrated on flexible or rigid supports.
low yield in full sunlight, from 5% to 7%,
larger surfaces needed than with crystalline silicon (lower Wp/m² ratio, approximately 60 Wp/m²),
performance declines during the first months of exposure to natural light (3-6 months), then stabilizes (-10 to -20% depending on the junction structure).
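The Wp/m² ratios quoted in this article translate directly into roof area. The 3 kWp target below is an arbitrary example, not a figure from the article:

```python
# Surface needed for a given peak power, from the ratios quoted in
# this article: ~60 Wp/m2 for amorphous silicon, ~150 Wp/m2 for
# single-crystal silicon. The 3 kWp target is an arbitrary example.
def area_m2(peak_watts, wp_per_m2):
    """Panel area required to reach a given peak power."""
    return peak_watts / wp_per_m2

target = 3000  # Wp
print(f"amorphous:      {area_m2(target, 60):.0f} m2")
print(f"single-crystal: {area_m2(target, 150):.0f} m2")
```

The factor of 2.5 in area is exactly the drawback listed above: the cheaper amorphous cells need considerably more surface for the same peak power.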
Dye-sensitized cells or Grätzel cells
Ruthenium tris-bipyridine used as the photosensitive dye
Also called dye-sensitized cells, Grätzel cells were named after their inventor, Michael Grätzel, of the Swiss Federal Institute of Technology in Lausanne (EPFL).
Principle of operation
It is a photoelectrochemical system inspired by plant photosynthesis, made up of an electrolyte that donates electrons under the effect of a dye excited by solar radiation. The electromotive force of this system comes from the speed with which the electrolyte replaces the electron lost by the excited dye before the latter can recombine: the photosensitive dye is impregnated in a semiconductor material attached to the transparent, conducting wall facing the sun, so that the electron released by the dye diffuses through the semiconductor to the conducting wall, where it accumulates in the upper wall of the cell and generates a potential difference with the lower wall.
Operation of a Grätzel cell.
These devices are promising because they use cheap materials implemented with relatively simple technologies. The first dye-sensitized cell, demonstrated at EPFL in 1991 by M. Grätzel and B. O'Regan, used:
an upper wall of fluorine-doped tin oxide (SnO2:F), a material that is both transparent and electrically conducting;
on the inner face of this wall, powdered titanium dioxide (TiO2), a semiconductor whose surface was impregnated with a ruthenium polypyridine complex as the photosensitive dye;
an iodide/triiodide (I–/I3–) electrolyte bathing the whole and ensuring conduction with the lower wall of the cell, completing the device.
They could be improved by plasmonic technologies.
These cells have reached efficiencies of 11% in the laboratory but are produced commercially with efficiencies of 3 to 5%, at a cost of approximately 2,400 €/kWp.
Single-crystal silicon cell
During cooling, the molten silicon solidifies into a single large crystal. The crystal is then cut into thin slices which will become the cells. These cells are generally a uniform blue.
good efficiency, from 14% to 16%,
good Wp/m² ratio (~150 Wp/m²), saving space where necessary,
high number of manufacturers.
Multicrystalline silicon cell
A multicrystalline silicon photovoltaic cell
During the cooling of the silicon in an ingot mould, several crystals form. The photovoltaic cell has a bluish but non-uniform appearance; patterns created by the different crystals can be distinguished.
square cells (with rounded corners in the case of monocrystalline Si), allowing better packing in a module,
good conversion efficiency, approximately 100 Wp/m², although a little lower than for single-crystal cells,
ingots less expensive to produce than single-crystal ones.
low efficiency under weak illumination.
Polycrystalline or multicrystalline? We will speak here of multicrystalline silicon. The term polycrystalline is used for layers deposited on a substrate (small grains).
Monolithic stacking of two simple cells. By combining two cells (for example a thin layer of amorphous silicon on crystalline silicon) that absorb in overlapping spectral ranges, the theoretical yield is improved compared to separate simple cells, whether amorphous, crystalline or microcrystalline.
high sensitivity over a broad range of wavelengths; excellent efficiency.
high cost due to the superposition of two cells.
3D solar cell
It was not enough just to think of it. Over the course of the day, or as its support moves, a curved surface collects solar energy better than a flat one. But a means still had to be found of creating small structures with the ideal shapes. An American team has just succeeded by making use of water drops. For the moment the discovery has not left the laboratory, but it is promising.
To get the best performance from a photovoltaic cell, it should be oriented perpendicular to the light rays. But the Sun moves (astronomers have claimed for a few centuries that it is in fact the Earth that turns), and solar panels are not always mounted on a fixed structure. Some people install them on backpacks, sailing boats, cars, gliders and microlights; the Solar Impulse project team even wants to make them the sole energy source of an airplane, while others want to make solar cells printable.
In theory, a curved sensor should achieve a better yield. It is also a way of increasing surface area: a half-sphere, for example, has twice the area of a flat disc of the same footprint. But photovoltaic cells are small and made of semiconductor materials that lend themselves well to flat surfaces. Obtaining curved structures is not easy, especially under cost constraints.
Led by Ralph Nuzzo, a team at the University of Illinois at Urbana-Champaign proposes an original and elegant manufacturing method based on self-assembly, published in PNAS. These researchers use tiny silicon films that fold up by themselves around water drops to take on various shapes.
Stages of the self-assembly of a 3D solar cell. The circuit (a semiconductor with a positively doped part, p+, and a negatively doped part, n+) is etched on a silicon wafer (SOI wafer) by traditional lithography techniques, with its electrical contacts (Cr/Au, chromium and gold). The very thin component obtained (here 2 microns) is released by a chemical etch (etch undercut), also a conventional process, and a water drop is deposited. The origami then closes up as planned into a sphere. At the bottom, the result: electrodes are welded, and the curves show the current obtained (milliamperes/cm²) as a function of the voltage. The result is improved by the presence of reflectors, such as a reflective basin in which the sphere is placed.
The team started from a silicon wafer and used traditional lithography techniques to etch the components of a photovoltaic cell. It then remained to fold this tiny origami, a few millimetres on a side and 1.25 to 2 microns thick. The idea is to use the surface-tension forces that appear between this film and a drop of water, the thin sheet tending to wrap itself around the drop. To obtain a particular three-dimensional shape, the researchers, at the etching stage, drew the component with a complex geometry, in the manner of a pre-cut cardboard sheet that folds into an object, a box for example.
Once the component is made on the wafer, a drop of water is placed at its center and the miracle happens: as the water evaporates, the silicon film folds up, like a flower closing, to adopt a particular shape.
In fact, rather than a miracle, most of the trick consists in determining precisely which cuts to make in the wafer to obtain the desired shape. The term origami is no exaggeration, and the team developed a mathematical model to predict this self-folding, taking into account the thickness of the film, the characteristics of the silicon, and the forces induced by surface tension during the evaporation of the water.
This tool now allows them, as the photographs published in the article attest, to obtain a range of varied shapes. To stabilize their tiny structures, the researchers deposit a piece of glass at their center, fixed with adhesive.
Current density, in milliamperes per cm², as a function of the voltage obtained with a cylindrical cell with or without reflectors, and with a flat cell manufactured in the same way.
Organic photovoltaic cell
Organic photovoltaic cells are photovoltaic cells in which at least the active layer consists of organic molecules.
They represent an attempt to cut the cost of photovoltaic electricity, without question the main barrier of this technology, but it is also hoped that they will be thinner, flexible, easier and cheaper to produce, while remaining robust. Organic photovoltaic cells benefit from the low cost of organic semiconductors as well as from many potential simplifications of the manufacturing process.
There are mainly three types:
Molecular organic photovoltaic cells
Organic photovoltaic polymer cells
Hybrid photovoltaic cells
Among organic semiconductors, a distinction is made between polymers, which are deposited by spin coating, and small-molecule materials, which are deposited by thermal evaporation.
Using plastic (PEN, LDC) as a substrate, these cells offer the prospect of continuous roll-to-roll production that would finally give access to solar panels at a reasonable price.
Still at the experimental research stage, the record efficiency of polymer solar cells in 2008 was between 4 and 5%; in July 2008, a laboratory world record of 5.9% was reached at the Institute of Applied Photovoltaics of the Technical University of Dresden.
Photovoltaic cells based on organic molecules offer an even lower efficiency than polymer cells, but their performance has improved recently. In 2009, researchers at the French universities of Angers and Strasbourg succeeded in raising their efficiency from 0.2% to 1.7%. In 2010, an American research team improved this figure again, bringing it to 2.5%.
Photovoltaic polymer cell.
Polymer photovoltaic cells designate a type of organic solar cell that produces electricity from light using semiconducting polymers. It is a relatively recent technique studied in laboratories by industrial groups and universities throughout the world.
Two repeat units of PEDOT
Still largely at the experimental stage, polymer photovoltaic cells nevertheless offer interesting prospects. They are based on organic macromolecules derived from petrochemistry, whose manufacturing processes consume much less energy than those used for cells based on mineral semiconductors. Their cost price is much lower and they are lighter and less fragile. Their flexible nature even makes them suitable for integration into flexible silicone or organic polymer materials, or into textile fibres. Their development can benefit from progress in chemical engineering, for example in the self-assembly of these molecules. Their main weakness lies in their still limited lifespan, caused by the degradation of the polymers when exposed to the sun.
Principle of operation
The physics underlying the photovoltaic effect in organic semiconductors is more complex to describe than that of cells based on mineral semiconductors. It involves different molecular orbitals, some playing the role of the valence band and others of the conduction band, in two distinct molecular species, one serving as an electron donor and the other as an acceptor, organized around a heterojunction, as in the case of mineral semiconductors:
The molecules serving as electron donors (by generation of excitons, i.e. electron-hole pairs) are characterized by the presence of π electrons, generally in a conjugated polymer said to be of p type.
These π electrons can be excited by photons in or near the visible spectrum, making them pass from the highest occupied molecular orbital (playing a role similar to that of the valence band in an inorganic semiconductor) to the lowest unoccupied molecular orbital (playing a role similar to that of the conduction band): this is called the π-π* transition (which corresponds, continuing the analogy with mineral semiconductors, to the injection of carriers into the conduction band across the band gap). The energy required for this transition determines the maximum wavelength that can be converted into electrical energy by the conjugated polymer.
Contrary to what happens in an inorganic semiconductor, electron-hole pairs in an organic material remain tightly localized, with strong coupling (and a binding energy of between 0.1 and 1.6 eV); the dissociation of the excitons takes place at the interface with an electron-acceptor material, under the effect of a gradient of chemical potential that is at the origin of the electromotive force of the device. These electron acceptors are said to be of n type.
Organic photovoltaic cells often use poly(ethylene naphthalate) (PEN) films as protective surface coatings, whose essential function is to prevent oxidation of the materials making up the cell: O2 is an impurity that acts as an electron-hole recombination center, degrading the electronic performance of the components. Under these protective coatings are one or more p-n junctions between electron-donor and electron-acceptor materials, as in a traditional photovoltaic cell with mineral semiconductors.
One implementation consists in inserting fullerene C60 molecules as electron acceptors (n type) between conjugated polymer chains (such as PEDOT:PSS, formed of poly(3,4-ethylenedioxythiophene) (PEDOT) as electron donor (p type) mixed with poly(styrene sulphonate) (PSS), which ensures its solubility).
Generally, current research focuses, for example, on polythiophene derivatives as p-type polymers, in particular poly(3-hexylthiophene) (P3HT), with fullerene derivatives as acceptors (n type) such as [6,6]-phenyl-C61-butyric acid methyl ester (PCBM).
Other p/n couples are being investigated, in particular those based on para-phenylene-vinylene (PPV) as donors, such as MEH-PPV/PCBM or MDMO-PPV/PCBM, or as both donors and acceptors, such as MDMO-PPV/PCNEPV.
Cells with very high efficiency have been developed for space applications. Multi-junction cells consist of several thin layers deposited by molecular beam epitaxy.
A triple-junction cell, for example, is made of the semiconductors GaAs, Ge and GaInP2. Each type of semiconductor is characterized by a maximum wavelength beyond which it is unable to convert the photon into electrical energy. On the other hand, below this wavelength, the surplus energy carried by the photon is lost. Hence the interest in choosing materials with cutoff wavelengths as close to each other as possible (multiplying their number accordingly) so that a majority of the solar spectrum is absorbed, generating a maximum of electricity from the solar flux. The cost of these cells is about USD 40/cm².
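The cutoff wavelengths mentioned here follow from each material's band gap via the usual conversion, lambda_max (nm) ≈ 1240 / Eg (eV). The band-gap values below are typical textbook figures, assumed here since the article does not give them:

```python
# Cutoff wavelength for each layer of a triple-junction cell.
# Band-gap values are typical textbook figures (assumptions).
band_gaps_ev = {"GaInP2": 1.9, "GaAs": 1.42, "Ge": 0.66}

for material, eg in band_gaps_ev.items():
    cutoff_nm = 1240 / eg  # photon energy E (eV) ~ 1240 / wavelength (nm)
    print(f"{material}: photons up to ~{cutoff_nm:.0f} nm can be converted")
```

Each junction thus covers a different slice of the spectrum, from the visible (GaInP2) down into the infrared (Ge), which is how the stack absorbs a majority of the solar flux.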
CIS semiconductor cell
The technique consists in depositing a semiconductor material made of copper, gallium, indium and selenium on a support.
One concern, however: raw material resources. These novel methods use rare metals such as indium, whose worldwide production is 25 tonnes per year at a price (April 2007) of 1,000 dollars per kg; tellurium, whose worldwide production is 250 tonnes per year; gallium, with a production of 55 tonnes per year; and germanium, with a production of 90 tonnes per year. Although the quantities of these raw materials needed to manufacture solar cells are tiny, a massive worldwide deployment of silicon-free thin-film photovoltaic solar panels would inevitably run up against this limited physical availability.
New-generation photovoltaic cell
A new photovoltaic cell technology could considerably increase the yield of solar energy. A national research project on photovoltaic cells should lead to the production of a third generation of photovoltaic cells.
These cells are based on a new technology developed in particular by researchers at NTNU (University of Trondheim), the University of Oslo, SINTEF and IFE (Institute for Energy Technology).
It is the result of a broader collaboration between these research institutes and universities and private partners (Elkem Solar, Fesil Sunergy, Hydro, Norsun, Prediktor, REC, Scatec, Solar Cell Repower, Umoe Solar) gathered within the Norwegian research centre for photovoltaic cell technology (one of the centres for research on sustainable energies set up by the Research Council of Norway).
Current photovoltaic cells have an efficiency of 16 to 18%. At most they could reach an efficiency of 29% (the so-called Shockley-Queisser limit). The new cells the Norwegian researchers are developing could reach an efficiency of 60 to 80%. This involves both optimal use of the energy properties of sunlight (using the whole spectrum of the light, not just part of it) and the improvement of certain properties of current photovoltaic cells.
When sunlight strikes a solid (silicon, in the case of photovoltaic cells), the energy carried by the photons (the particles of light) is transferred to the electrons of that solid. The electrons thereby gain energy.
Band theory states that in a solid, the various electrons in the atoms occupy energy states that differ from one another. These energy states are not arbitrary: they lie inside well-defined energy bands (see illustration), and electrons cannot take energy levels outside these intervals.
Two bands are particularly important: the valence band and the conduction band. The valence band is the last band completely filled with electrons (no further electrons can be added to it); the conduction band is the one that follows it (not completely filled with electrons).
To pass from one to the other, an electron needs to receive an amount of energy greater than the interval between the two bands. In a solar cell, this energy is supplied by the photons of light. This energy difference between the two bands is called the "band gap". Once it has crossed this band gap, the electron in the conduction band is free to move through the solid (it is no longer bound to its atom): this is the photovoltaic (or photoelectric) effect.
Sunlight is made up of several wavelengths, which means the photons do not all carry the same amount of energy. Some carry enough to push an electron across the band gap; others do not. This is one of the reasons limiting the efficiency of conventional photovoltaic cells: many of the incoming photons go unused.
To obtain the maximum electricity in the end, as many electrons as possible must pass from the valence band to the conduction band. Moreover, some photons bring more energy than is needed to cross the band gap; the excess is lost (as heat).
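The selection effect described above can be sketched numerically. A minimal illustration in Python, assuming silicon's textbook band gap of about 1.12 eV; the sample wavelengths are arbitrary choices, not data from the text:

```python
# Sketch: which solar photons can promote an electron across the band gap.
# Assumes silicon's band gap of ~1.12 eV (a standard textbook value);
# the wavelengths are illustrative samples from the solar spectrum.

PLANCK_EV_NM = 1239.84  # h*c in eV·nm, so E[eV] = 1239.84 / wavelength[nm]
BAND_GAP_EV = 1.12      # crystalline silicon

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy carried by a photon of the given wavelength."""
    return PLANCK_EV_NM / wavelength_nm

def crosses_gap(wavelength_nm: float) -> bool:
    """True if the photon can lift an electron into the conduction band."""
    return photon_energy_ev(wavelength_nm) >= BAND_GAP_EV

for wl in (400, 700, 1100, 1500):  # violet, red, near-IR, IR
    e = photon_energy_ev(wl)
    print(f"{wl} nm -> {e:.2f} eV, absorbed: {crosses_gap(wl)}")
```

Photons redder than roughly 1,100 nm fall below the gap and are wasted, while the excess energy of bluer photons is lost as heat.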
What the Norwegian researchers have done differently is to change the structure of the photovoltaic cell by adding nanocrystal structures to the silicon, which makes it possible:
to convert an overly energetic photon into two less energetic photons (each still carrying enough energy to cross the band gap): this is down-conversion. It allows a single incident photon to push more than one electron into the conduction band;
to combine two photons of too little energy into one photon of sufficient energy to cross the band gap: this is up-conversion.
By also producing cells that can have different band gaps, one can expect to convert the whole of the solar spectrum.
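The energy bookkeeping behind the two conversions can be sketched as follows. This is an idealized, lossless model invented purely for illustration; real down- and up-conversion processes have their own losses:

```python
# Idealized (lossless) energy bookkeeping for down- and up-conversion.
# The split/combine rules are assumptions made for illustration only.

BAND_GAP_EV = 1.12  # silicon band gap, assumed

def down_convert(photon_ev: float) -> list[float]:
    """Split one energetic photon into two of half the energy,
    but only if both halves still clear the band gap."""
    half = photon_ev / 2
    return [half, half] if half >= BAND_GAP_EV else [photon_ev]

def up_convert(a_ev: float, b_ev: float) -> list[float]:
    """Combine two sub-gap photons into one, if their sum clears the gap."""
    total = a_ev + b_ev
    if a_ev < BAND_GAP_EV and b_ev < BAND_GAP_EV and total >= BAND_GAP_EV:
        return [total]
    return [a_ev, b_ev]

print(down_convert(3.0))     # a 3.0 eV photon splits into two usable halves
print(up_convert(0.7, 0.8))  # two sub-gap photons merge into one usable photon
```

Either way, the aim is the same: more photons end up above the gap, so more electrons reach the conduction band.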
Moreover, when an electron passes from the valence band to the conduction band, it leaves a "hole" in the valence band (which is then no longer completely filled with electrons). In the natural photovoltaic effect, the electron that has reached the conduction band quickly finds another "hole" and falls back into the valence band; the energy initially brought by the photon is then lost. In photovoltaic cells, the structure forces the free electrons to move away from the "holes" so as to create a potential difference between the positive pole (the holes) and the negative pole (the electrons), as in a battery.
The longer this potential difference is maintained, the more electricity is recovered. The electron in the conduction band must therefore go as long as possible without finding a "hole". With current technology, this time is on the order of a nanosecond to a microsecond. The materials the Norwegian researchers are developing would make it possible to extend this duration to thousandths of a second.
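One way to see why that lifetime matters: the distance an electron diffuses before recombining scales as L = sqrt(D·τ). A small sketch, assuming a typical electron diffusivity for crystalline silicon (an assumed round number, not a figure from the text):

```python
# Rough illustration of why a longer conduction-band lifetime matters:
# the distance an electron diffuses before recombining scales as
# L = sqrt(D * tau). The diffusivity is an assumed, typical order of
# magnitude for electrons in crystalline silicon.
import math

D_CM2_PER_S = 30.0  # assumed electron diffusivity in silicon

def diffusion_length_cm(lifetime_s: float) -> float:
    return math.sqrt(D_CM2_PER_S * lifetime_s)

for tau in (1e-9, 1e-6, 1e-3):  # nanosecond, microsecond, millisecond
    print(f"tau = {tau:.0e} s -> L = {diffusion_length_cm(tau) * 1e4:.1f} um")
```

Going from a nanosecond to a millisecond lifetime stretches the diffusion length a thousandfold, from a couple of micrometres to a couple of millimetres.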
Cell efficiency (laboratory)
Module efficiency (laboratory)
Level of development
Thin-film crystalline silicon
Ready for production
At the research stage
Graetzel cell
At the research stage
At the research stage; production exclusively for space applications
* under a concentration of 236 suns ** GaInP ⁄ GaAs ⁄ Ge triple-junction module
Thermodynamic solar power station
A solar thermal power station (also called a thermodynamic or heliothermodynamic solar power station) is a power station that concentrates the sun's rays using mirrors in order to heat a heat-transfer fluid, which in general is used to produce electricity.
Types and technologies
central-tower power plant
power plant made up of parabolic trough collectors
power plant made up of parabolic dish collectors
power plant made up of Fresnel collectors
A central-tower power plant consists of a field of special solar collectors called heliostats, which concentrate the sun's rays. The mirrors focus the light on tubes in which a heat-transfer fluid is raised to a high temperature. This fluid is sent to a boiler, where it turns water into steam. The steam drives turbines, which turn alternators that produce electricity.
In the solar chimney concept, the sun's rays are not concentrated. Air is heated under a large collector surface (a kind of gigantic greenhouse) before escaping through a tall central chimney, passing through turbines located at its base. In the Buronga project (Australia), the chimney would reach 1,000 m in height, and the collector surface 7 km in diameter.
This type of power station consists of parallel rows of long trough-shaped (hemicylindrical) mirrors, which rotate about a horizontal axis to follow the sun's path.
The sun's rays are concentrated onto a horizontal tube in which a heat-transfer fluid circulates, carrying the heat to the power block itself.
The fluid temperature can rise to 500 °C. This energy is transferred to a water circuit; the steam produced then drives a turbine, which generates electricity.
Some of these power stations are now able to produce electricity continuously, night and day, thanks to a heat-storage system.
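The order of magnitude of such a plant's output can be estimated from the chain of efficiencies described above. A back-of-the-envelope sketch; every figure here (irradiance, optical, thermal and cycle efficiencies, collector area) is an assumption chosen for illustration, not data on any particular plant:

```python
# Back-of-the-envelope output of a parabolic-trough plant. All the
# efficiency figures and the irradiance value are assumptions chosen
# for illustration, not data from any specific plant.

def trough_electric_power_mw(aperture_m2: float,
                             dni_w_m2: float = 850.0,   # direct normal irradiance, assumed
                             eta_optical: float = 0.75, # mirrors + receiver, assumed
                             eta_thermal: float = 0.90, # piping/storage losses, assumed
                             eta_cycle: float = 0.38):  # steam-turbine cycle, assumed
    """Electric power from collector aperture area, in MW."""
    thermal_w = aperture_m2 * dni_w_m2 * eta_optical * eta_thermal
    return thermal_w * eta_cycle / 1e6

# e.g. 500,000 m^2 of collector aperture:
print(f"{trough_electric_power_mw(500_000):.0f} MW")
```

The point of the sketch is the structure, sunlight is whittled down by each stage (optics, heat transport, steam cycle) before becoming electricity.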
The parabolic mirror reflects the sun's rays towards a focal point; the solar radiation is thus concentrated on the receiver, whose temperature rises.
The receiver in question is a Stirling engine, which runs on the rise in temperature and pressure of a gas held in a closed enclosure.
This engine converts solar thermal energy into mechanical energy and then into electricity.
Throughout the day, the base of the dish is automatically steered to face the sun and follow its path, so as to benefit from maximum sunshine.
Parabolic-dish systems can reach 1,000 °C at the receiver, and achieve optimal efficiencies of conversion of solar energy into electricity for small unit capacities.
The performance of the overall system depends closely on the optical quality of the dish and the efficiency of the Stirling engine.
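The 1,000 °C receiver temperature matters because the Carnot bound caps the efficiency of any heat engine, the Stirling engine included. A one-line check (the ambient temperature is an assumed 27 °C):

```python
# The Carnot bound puts a ceiling on any heat engine, including the
# Stirling engine at a dish's focus. Temperatures are illustrative:
# ~1000 degC at the receiver (as in the text) against ~27 degC ambient
# (assumed).

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of a heat engine between two
    temperatures (in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

t_hot = 1000.0 + 273.15   # receiver, from the text
t_cold = 27.0 + 273.15    # ambient, assumed
print(f"Carnot ceiling: {carnot_efficiency(t_hot, t_cold):.1%}")
```

The high receiver temperature is what gives dish-Stirling systems their headroom: the hotter the focus, the higher this ceiling.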
Fresnel mirror
The principle of a Fresnel concentrator lies in its flat mirrors, known as compact linear reflectors. Each of these mirrors can pivot to follow the sun's path, permanently redirecting and concentrating the sun's rays towards an absorber tube.
A heat-transfer fluid circulating in this horizontal tube is heated to as much as 500 °C. This energy is transferred to a water circuit; the steam produced then drives a turbine, which generates electricity.
The main advantage of this technology is that flat mirrors are much cheaper than parabolic ones.
multi-blade turbine (resembling an aircraft engine)
wind turbine kit
a wind turbine in the shape of a carousel
several pivoting arms to which cables are attached; at the end of each cable is a kite whose altitude, between 150 and 700 metres, is controlled electronically
hovering wind turbine
The flying structure is held aloft by a line of small airships, each carrying a group of four horizontal blades and a turbine (about 100 kg). Each of these elements would produce 30 kW:
Each airship measures 5 metres in diameter and 8 metres in length. It carries a Darrieus-type wind turbine, but mounted horizontally.
Magenn air power rotor
At the end of a cable, at 300 metres of altitude, this tethered balloon filled with helium carries four or five large blades. It behaves like a horizontal rotor turning in the wind. This rotation also induces an additional lift force: the Magnus effect
Carried by its helium and its Magnus effect, MARS, stiffened by an aluminium frame, rotates and floats between 150 and 300 metres
A generator mounted on the axis produces electricity. According to its designers, MARS would start operating at a wind of only 1 metre per second (3.6 km/h)
Windstalk, a wind turbine without blades
The Windstalk concept consists of carbon-fibre-reinforced resin poles, 55 metres tall, anchored in the ground in concrete bases 10 to 20 metres in diameter. The poles, 30 cm in diameter at the base and tapering to 5 cm at the top, are packed with a stack of piezoelectric ceramic discs. Between the discs are electrodes, connected by cables that run along each pole
When a pole sways in the wind, the stacks of piezoelectric discs are compressed, generating a current in the electrodes
Moreover, to increase energy production, the designers also planned a torque generator located in the concrete base of the poles which, as the masts sway, forces a liquid through a network of interconnected cylinders. When the wind blows, part of the electricity produced is used to power a set of pumps that move water from a lower compartment to a higher one. Then, when the wind drops, the water flows back down the opposite way, turning the pumps into electricity generators
Conventional wind turbines are the propeller-type machines that make up wind parks, farms or fields.
Operation of horizontal-axis propeller wind turbines
single-bladed wind turbines
Single-bladed wind turbines are very rare: although cheaper to build, they suffer from an imbalance caused by the counterweight installed to balance the single blade; this imbalance, moreover, leads to premature wear of the turbine.
two-bladed or three-bladed wind turbines
A horizontal-axis propeller wind turbine generally has 2 or 3 blades.
It produces electricity thanks to the following components:
the mast, which places the turbine at a height where the wind speed is higher, steadier and less turbulent than at ground level
the blades, generally three, mounted on the rotor shaft of the alternator; they are the machine's point of contact with the wind
a nacelle mounted at the top of the mast, housing the electrical, pneumatic and electronic components that convert the rotor's rotation into electrical energy on the principle of the dynamo or the alternator
a substation near the turbines; this station meters the park's electricity production and steps up the voltage for connection to the existing public electricity grid, so that energy produced but not directly consumed can be fed in
Efficiency and choice of site
The effectiveness of a wind turbine depends mainly on its site. The power delivered increases with the cube of the wind speed: a site with winds of about 30 km/h will produce 8 times more than one where the winds reach only 15 km/h. In general, wind turbines only turn at wind speeds above 11 km/h. As a safety measure, when the wind exceeds 90 km/h, the turbine is shut down.
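The 8× figure follows directly from the cubic law. A quick check in Python (the air density is the standard sea-level value; the rotor area is an arbitrary assumption and cancels out of the ratio):

```python
# The claim that 30 km/h winds yield 8x the power of 15 km/h winds
# follows directly from P = 0.5 * rho * A * v^3. Air density is the
# standard sea-level value; the rotor area is an assumed round number.

RHO = 1.225  # kg/m^3, air at sea level

def wind_power_w(v_m_s: float, rotor_area_m2: float) -> float:
    """Kinetic power of the wind through the rotor disc (before any
    conversion losses)."""
    return 0.5 * RHO * rotor_area_m2 * v_m_s ** 3

area = 100.0  # m^2, assumed
p_30 = wind_power_w(30 / 3.6, area)  # 30 km/h converted to m/s
p_15 = wind_power_w(15 / 3.6, area)  # 15 km/h converted to m/s
print(f"ratio: {p_30 / p_15:.0f}x")  # (30/15)^3 = 8
```

Doubling the wind speed multiplies the available power by eight, which is why siting dominates every other design decision.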
The wind potential of a site is generally pre-assessed using weather data from nearby stations. For very windy sites, indicators such as the vegetation (trees bent by the wind) can also confirm the strength of the wind. A proper wind study remains essential to estimate the electricity that can be produced, and a measurement mast of more than 40 m is often installed to assess the site's potential precisely (a study carried out as part of projects subject to building permits).
The output varies with the wind speed and is bounded by Betz's law. The capacity factor measures the ratio between actual electricity production and the maximum possible production (the machine's rated power multiplied by the number of hours in a year). This ratio generally lies between 20% and 30%, and the national average is around 25%.
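The capacity-factor definition above can be computed directly. The production figure below is invented to land on the quoted 25% national average:

```python
# Capacity factor as defined in the text: actual annual production
# divided by rated power times the hours in a year. The turbine size
# and production figure are illustrative, chosen to match the quoted
# 20-30% range.

HOURS_PER_YEAR = 8760

def capacity_factor(annual_mwh: float, rated_mw: float) -> float:
    """Fraction of the theoretical maximum actually produced."""
    return annual_mwh / (rated_mw * HOURS_PER_YEAR)

# e.g. a 2 MW turbine producing 4,380 MWh in a year:
cf = capacity_factor(4380.0, 2.0)
print(f"{cf:.0%}")
```

A 2 MW machine at a 25% capacity factor thus delivers the same energy as a hypothetical 0.5 MW machine running flat out all year.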
Construction techniques
Conventional wind turbines are installed on land, in parks or industrial estates, or even on farms. They can also be installed at sea.
Onshore wind farms
Fields of wind turbines can be integrated into an agricultural and/or industrial site, or built on a dedicated site.
Offshore wind farms
This approach makes it possible to install wind turbines in shallow waters, often near a port, thus exposing them to stronger and almost constant winds.
Advantages and Disadvantages
A clean, non-polluting energy
A renewable, non-fossil energy, contributing to our energy independence and available indefinitely, unlike gas or uranium, which are being depleted and entirely imported. Renewable energies do not fuel military conflicts over access to the remaining natural resources.
The price of renewable energies is constantly falling, unlike that of fossil or nuclear fuels, whose price rises as resources such as uranium become exhausted (its price was multiplied by 15 in 3 years). The cost of renewable energies is fixed and predictable.
The external costs of renewable energies are very low.
When large wind farms are installed on arable land, only about 2 per cent of the ground is actually needed for the turbines. The remaining surface stays available for farming, livestock and other uses.
The business tax paid by the wind-farm operator to the commune is an additional financial resource, sometimes essential to the financial survival of often rural and poor communes.
Residents do not always accept the visual impact of wind turbines. The noise perceptible in the vicinity alters the environment close to the machines. Electromagnetic interference sometimes causes reception problems, which are dealt with by the wind-farm operators.
Reception of radio waves can sometimes be disturbed, resulting in a degraded picture on television sets.
Obstruction lighting is compulsory, as for any other very tall structure. These lights are visible several kilometres away, in order to signal the turbines' position and ensure aviation safety.
Some birds may collide with the blades; however, this risk should be kept in perspective, because all studies on the subject show that such collisions are very rare (on the order of 1 to 5 birds per turbine per year) and that birds locate wind turbines very well. Avifauna studies are compulsory before construction, and the LPO works on many wind projects in order to study a park's impact before it is built.
Wind turbines can interfere with weather or military radars by obstructing low-altitude wave propagation, creating a blind region in the data. Moreover, since the blades rotate, the radar registers their speed, which is indistinguishable from a moving target such as rain. This issue is currently under study in various European countries in order to solve these difficulties technically. To avoid causing losses to radar operators, the agreement of Météo-France as well as of civil aviation and the air force is therefore required in order to obtain the building permit for a wind farm.
According to anti-wind campaigners, the intermittent energy produced by wind turbines often requires backup from thermal power generation. In reality, the opposite occurs: wind production almost entirely displaces thermal generation. Moreover, while the production of a single park is intermittent, the spread of parks across the country creates a relatively stable output, owing both to the dispersion of the parks and to the decorrelated wind regimes present on French territory.
Savonius wind turbine
The Savonius rotor is a vertical-axis wind turbine. It was invented by the Finnish engineer Sigurd Savonius in 1924 and patented in 1929.
The operation of the Savonius rotor is based on the aerodynamic torque induced by the deflection of the flow on the blades.
The Savonius type, schematically made up of two or more slightly offset half-cylindrical scoops, presents a great number of advantages. Besides its compactness, which allows the turbine to be integrated into buildings without spoiling their aesthetics, it is quiet, starts at low wind speeds, and delivers a high torque, though one that varies sinusoidally during rotation. The large increase in mass with size, however, makes the Savonius type poorly suited to large-scale deployment in wind farms.
Principle of the rotor of Savonius
The original model was designed with a spacing between the blades such that e ⁄ D = 1 ⁄ 3,
where D is the rotor diameter. But it performs better with a geometry such that e ⁄ D = 1 ⁄ 6.
The Savonius turbine is used in applications where cost or reliability matters more than efficiency. For example, most anemometers are of the Savonius type, because efficiency plays no role there.
The most common application is the Flettner ventilator, developed by the German aeronautical engineer Anton Flettner in the 1920s. Mounted on the roofs of buses and vans, it provides ventilation and cooling. This ventilator uses a Savonius rotor. It is still manufactured by the company Flettner Ventilator Limited.
Small Savonius rotors are sometimes used for advertising purposes, their rotation drawing attention to the advertised object. The blades are sometimes decorated with a two-image animation.
Advantages and disadvantages of the Savonius wind turbine
Compact
Quiet
Starts at low wind speeds
High starting torque
No constraints on the direction of the wind
Darrieus vertical wind turbine
This type of design considerably reduces noise while allowing operation in winds above 220 km/h, whatever their direction
The main defect of this type of turbine is its difficult starting: the weight of the rotor rests on its base, generating friction
Several variants exist around this principle, from the simple cylindrical rotor (two aerofoils placed on either side of the axis) to the parabolic rotor, where the aerofoils are bent into a troposkein shape and fixed at the top and the base of the vertical axis
The generator can be placed at ground level (depending on the model)
Less bulky than a "conventional" wind turbine
Can be integrated into buildings
helical wind turbine
The Turby wind turbine has a power of 2.5 kW. It measures 3 m high and 2 m wide. It can be mounted 5 m above the roof of a building, and its shape allows it to use winds coming from many directions, from 4 m/s to 55 m/s, its rated power being reached at 14 m/s.
vertical-rotor wind turbines
other wind turbine models using a Darrieus-type rotor
Windside wind turbine
This vertical-axis wind turbine comes from naval engineering: the rotor is set in rotation by two spiral-shaped vanes. The first tests were made in the south of Finland, on land, at sea and in a laboratory wind tunnel. Over 19 years of development and research, Windside has tested battery-charging systems of various types and continues to develop its own product range towards larger models.
The structures of Windside wind turbines vary. These variants are indicated by the letter at the end of the model name, for example A, B or C. These indices show for which application and under which conditions the model in question was designed. The A turbines withstand a constant storm wind of 60 m/s, the B 40 m/s and the C 30 m/s, while still producing energy.
All Windside turbines are built to withstand storms, freezing, ice, heat and humidity. Windside turbines begin charging batteries in very light winds; the larger the machine, the earlier charging starts. The charging start-up speed is 4 m/s for the WS-0,15, 3 m/s for the WS-0,30 and 2 m/s for the largest models. With the larger models, charging continues down to speeds as low as 1 m/s.
The largest Windside blades in series production have a diameter of 1 metre and a height of 4 m. Based on studies at the Technical University, the turbine can be scaled up from the model without changing its geometry. In this way a Windside turbine could reach 200 m in height and 70 m in diameter, which would correspond to an installation of several MW.
The WS turbines have been tested and have won awards. Measurements taken in the Finnish archipelago have shown that a WS turbine produces 50% more electricity per year than a conventional propeller turbine of identical swept area. Worldwide, the prevailing moderate wind is about 3 m/s. The special construction of the WS vanes makes it possible to use winds of 1-3 m/s, speeds insufficient for other turbines. The WS turbine also operates in storms, tested at 60 m/s, a fatal speed for others. These two feats are claimed as world records.
Besides wind speed, turbulence and wind direction affect a turbine's electricity production. The WS turbine uses winds from any direction, even turbulent ones, unlike conventional models: the spiral WS vanes always catch the wind at a favourable angle. The WS turbine is also in harmony with nature and the human environment: the spiral shape, together with a rotation speed that does not exceed the wind speed, makes it completely quiet.
There are no ejected blocks of ice, oil leaks or sharp blades; the installation is safe for people, animals and nature. The elegant design and slow rotation of the WS turbine resemble a work of art. Several artists, architects and town planners have noted this, and the result can already be admired on ecological buildings.
horizontal Darrieus wind turbine
This type of turbine is nothing other than a Darrieus turbine laid on a horizontal plane. Its advantages are ease of installation, small footprint, light infrastructure and low visual impact; its disadvantage is that it cannot be oriented into the wind, which in some locations gives a poor yield.
wind turbine with rotating aerofoils
Wind turbines with rotating aerofoils are characterized by the real-time dynamic optimization of the pitch of the blades; the blades behave in the same way as the sail of a sailing boat that would trace a circle in the water in a given wind.
The blades thus accurately reproduce all the points of sail of a sailing boat, steering their tangential heading (angle) relative to the wind direction. As a result, the tangential thrust on the rotor arms supporting the blades is always optimized.
AeroCube wind turbine
The AeroCube uses a rotor adapted to urban use: although mounted horizontally on the ridge of a roof, its rotor is of the vertical-axis type, enabling it to achieve high performance even when the rotation speed is low.
Since the blade tips move at a speed very close to that of the wind, the AeroCube runs quietly and without vibration.
Lastly, since the rotor sits inside the AeroCube's casing, completely sheltered from solar radiation, it creates no stroboscopic effect and casts no shadow.
Windspire wind turbine
The Windspire is a small vertical-axis wind turbine designed to supply energy to private individuals and companies.
Only 9.1 m tall, the Windspire includes a device that allows direct connection to the building's power supply, offsetting electricity consumption and reducing energy costs.
Its slender design, quiet operation and ability to use turbulence to generate energy allow it to blend perfectly into urban and suburban environments.
The Windspire generates approximately 2,000 kWh per year in moderate winds of 5.4 m/s, equivalent to more than half the electricity consumption of an average European household.
The Windspire is an economical, quiet and aesthetic wind device that can be used in urban, suburban, rural or isolated settings. Manufactured in the United States by Mariah Power in a former automobile factory, Windspire turbines rely on a propeller-free vertical-axis design incorporating a patented technology that maximizes the conversion of wind power into electricity, regardless of changes in wind speed and direction.
The Windspire range includes a 1.2 kW version, a special high-wind version designed to withstand winds of up to 270 km/h (168 mph), and a 230 V converter for the international markets.
Cap'Horn wind turbine
The shroud is the originality of this aerogenerator of a new type, the fruit of four years of development. The carefully calculated profile of this circular structure increases the air flow that sets the blades in rotation. According to its designers, the benefit is threefold.
First of all, the Cap'Horn 10 makes do with a rotor 4.4 m in diameter (5.3 m with the shroud) to reach a power of 12 kW, where a conventional wind turbine needs a rotor 7 m in diameter, explains Jean-Charles Poullain, the company's director.
The second strong point is noise reduction: by eliminating blade-tip whistling, the shroud considerably lowers the noise level, in the opinion of several informed observers. Lastly, the third advantage: birds spot the turbine better and systematically fly around it; gulls have even come to perch on the shroud of the test machine in Cancale! Numerous tests in the wind tunnel of the École Nationale Supérieure des Arts et Métiers yielded a power coefficient of 0.72, where a conventional wind turbine shows 0.42.
This very interesting performance results from the good match between the shape of the blades and that of the shroud
The company has other projects in view: besides smaller versions of 2 kW and 500 W, the manufacturer is seeking to finance the development of a 600 kW ducted aerogenerator. Rotor diameter: 25 m, or 30 m with the shroud, against 43 to 45 m for a conventional turbine of this power.
spherical wind turbine
The Energy Ball channels the air through its six blades to drive the generator. This spherical wind turbine is said to have properties that improve its rotational performance while lowering the noise level, which would make it ideal for small-scale energy production.
The spherical wind turbine can be installed on the roofs of private homes, rural businesses and even public buildings. According to its designers, the apparatus can reach up to six times the speed of the wind.
It becomes productive at wind speeds between 3 m/s and 40 m/s.
multi-rotor wind turbine
The SELSAM company has designed a wind turbine with seven three-bladed rotors, 2.1 m in diameter, delivering a power of 6 kW.
The shaft (the rod on which the rotors are mounted) is not parallel to the ground but tilted, in order to prevent the airflow of one rotor from disturbing the next.
Eliogir wind turbine
Particularly suited to zones with turbulent winds. Ideal for urban environments. Easy to integrate into your architectural surroundings.
A diameter of 8 m and a blade height of 3 m combine a reduced rotation speed with a very high torque.
Direct, lossless transmission of the torque to the generator. A production of up to 175 megawatt-hours per year.
Aerocap wind turbine
The Aerocap concept stems from an innovative principle of energy recovery using amplification of the ambient wind speed (the Venturi effect) and a proprietary energy-conversion module (developed by Windcap). The wind-speed amplification factor is 1.5 to 1.7.
For the same energy output, this approach allows smaller and more robust propellers, working over a wider range of speeds. Assembling several stacked modules and/or parallel turbines makes it possible to reach the desired power levels.
Stormblade wind turbine
The Stormblade Turbine, created by Victor Jovanovic, founder of the company, has a design similar to that of an aircraft jet engine: the blades are protected by a shroud that guides the airflow through the turbine like an intake duct. However, this shroud is exposed to the wind and to the high speed of the airflow crossing it, and can thus develop a "parachute" effect. The turbine's mast then undergoes extreme stresses and must be braced by reinforced scaffolding, which requires more ground area and increases the cost of the system.
The system's main innovation thus concerns the rotor, which is based on the turbine of a jet engine. According to Mr. Jovanovic, "the jet engine has evolved over the last 50 years to produce less drag, allowing the blading to turn faster". The system's aerodynamics are thus improved, making it possible to reduce drag and increase the rotor's speed without suffering gyroscopic precession. The claimed efficiency of the Stormblade Turbine is 70%, compared with 30-40% for current three-bladed models, which would mean this design exceeds the Betz limit (59%) on the maximum efficiency of wind turbines. Jovanovic estimates that his turbine will produce electricity at wind speeds between 3 m/s (11 km/h) and 54 m/s (193 km/h), thus doubling the usable speed range.
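The Betz limit quoted above is easy to reproduce from the standard actuator-disc analysis; this sketch is the textbook derivation, not anything specific to the Stormblade design:

```python
# The Betz limit caps the fraction of the wind's kinetic power any
# open rotor can extract at 16/27 (~59.3%), which is why a 70% figure
# for an open rotor would exceed it. This follows the standard
# actuator-disc power coefficient C_p(a) = 4a(1-a)^2.

def power_coefficient(a: float) -> float:
    """Actuator-disc power coefficient for axial induction factor a."""
    return 4 * a * (1 - a) ** 2

betz = power_coefficient(1 / 3)  # the maximum occurs at a = 1/3
print(f"Betz limit: {betz:.4f} (= 16/27)")
```

Shrouded (ducted) rotors are usually argued to escape this bound because the shroud draws in air from a larger capture area than the rotor disc itself; the comparison with 59% then depends on which reference area is used.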
MagLev Wind Turbine Technologies, the wind of the future
The MAGLEV vertical-axis wind turbine ("maglev" standing for magnetic levitation) was developed and produced by the Guangzhou Institute of Energy Conversion, under the supervision of the Chinese Academy of Sciences.
The REGENEDYNE company decided to pursue the work in cooperation with the Chinese team, incorporating the latest innovations from Germany.
The company's first products will be wind turbines floating on a magnetic-field cushion, with capacities from 10 megawatts to 100 megawatts.
This system will include the latest magnetic-levitation technology, currently used in the high-speed rail lines of Shanghai, China and Lathen, Germany.
REGENEDYNE technology consists of a small unit that produces enough wind power to supply 1,000 homes at a cost below $0.01 per kilowatt-hour (kWh). The turbine incorporates adjustable contoured surfaces that capture more wind, in contrast with the blades of horizontal-axis machines, which deflect it. The MagLev unit is assisted by a linear synchronous motor (LSM), and the levitation allows the turbine to rotate without friction. This technology is claimed to deliver substantially more energy to electricity producers than traditional wind turbines.
Wind turbines from 10 MW to 2,000 MW would set the standard in the market, and the most powerful unit alone would power more than 750,000 homes. The REGENEDYNE turbine is the first to bring the benefits of MagLev to wind power, using permanent magnets to eliminate friction by floating the rotor above its base.
This product is said to improve wind-capture capacity by over 80% compared with traditional wind turbines and to reduce the costs of a wind farm by about 50%.
Other benefits: it uses less ground area for the same output, starts in winds of 1.5 m/s and, by design, has less impact on birds and bats.
Moving from theory to practice, the company Off Grid Technologies is initiating a vertical-axis wind farm on Lake Michigan near the city of Evanston, which should be completed during 2012.
The project, with a capacity of 200 MW provided by 20 vertical-axis turbines installed on offshore stations, is intended to demonstrate that the technique works well and is financially profitable. Industry groups such as Siemens are said to be in line to use this technology in Europe.
Classification of wind turbines
inspired by the book by Guy Cunty
Type (orientation of the axis of rotation, displacement of the blades, orientation device) | U/V (2) | Power built or planned (kW) | Remarks
Blades moving perpendicular to the wind:
American multiblade mill | 1 to 2 | 0.5 to 50 | simple, very widespread, high starting torque (pumping)
Two- or three-bladed, upwind or downwind | 2 to 12 | 0.05 to 1,200 | relatively simple; the best efficiency relative to the well-known Betz limit
Wind-arresting or travelator type (3), axis horizontal or vertical | 0.5 to 1 | 2 to 3 | -
Oscillating profile (3) | - | - | complicated mechanism but good efficiency
Blades moving parallel to the wind:
Darrieus rotor (5) | 5 to 8 | 5 to 5,000 | does not self-start; good efficiency; relatively simple; worth developing
Savonius rotor | 0.5 to 1.7 | 0.02 to 1.7 | very old, very simple system
- | 0.3 to 0.6 | 0.001 to 0.05 | very simple; provides hardly any power
With Chinese flap valves (4) | 0.2 to 0.6 | 0.05 to 3 | very old, simple system; heads into the wind; many shocks, noise, imbalance
Cross-flow multiblade rotor (3) | 0.3 to 0.4 | 0.5 to 5 | fairly simple; large wind loading
With cyclic blade orientation (3) | 0.3 to 0.5 | 0.5 to 8 | -
With screen (3) | 0.2 to 0.6 | 0.5 to 5 | heavy; large wind loading
The INVELOX-type wind turbine
Daryoush Allaei, a former researcher for the American Departments of Defense and Energy and founder of SheerWind, took the exact opposite course. His idea: place the turbine in underground tunnels. Captured at the surface, the air accelerates through a bottleneck and then drives a turbine that generates electric current. A stable current, moreover: smart vents that open and close along the ducting make it possible to control the intake of air.
According to SheerWind, INVELOX has two assets. The first lies in its architecture: simpler, with fewer moving parts, it requires little maintenance, whereas the turbulence of outdoor winds, which batters blades and rotors, is the main cause of cost overruns in operating wind farms. The second advantage: the stability and modularity of its electrical output make it an easier source to feed into the grid.
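The claimed benefit of the intake funnel rests on two textbook relations: incompressible continuity (a narrowing duct speeds the flow up in proportion to the area ratio) and the cubic dependence of kinetic power on speed. A minimal sketch, with purely illustrative numbers that are not SheerWind data:

```python
def venturi_speedup(v_in, area_in, area_throat):
    """Incompressible continuity A1*v1 = A2*v2, solved for the throat speed."""
    return v_in * area_in / area_throat

def power_gain(v_in, v_throat):
    """Kinetic power per unit area scales with v**3, so the gain is (v2/v1)**3."""
    return (v_throat / v_in) ** 3

# Assumed figures: a 3:1 area contraction on a 5 m/s intake wind
v_throat = venturi_speedup(5.0, area_in=9.0, area_throat=3.0)
print(v_throat)                    # 15.0 m/s at the throat
print(power_gain(5.0, v_throat))   # 27.0: power density rises 27-fold
```

Tripling the speed multiplies the available power density by 27, which is why even a modest contraction would let a small turbine in the throat match a much larger open rotor.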
Validation of technology
At equal power to a traditional wind turbine, SheerWind thus hopes to halve the height of its installation and to divide the size of its turbine by four, at a cost per kW that is 19 to 36% lower.
INVELOX technology was examined and validated by a technical advisory committee, a team of experts from major research universities and organizations. Prototypes were tested under controlled laboratory conditions, and the test results were used to build and validate the computational fluid dynamics model.
The first small unit was installed in a field close to the SheerWind facility in Chaska, Minnesota. The unit includes instruments for acquiring wind-speed and power data. Preliminary speed data made it possible to validate the predictions of the computational fluid dynamics model.
Marine energies are the subject of much research. The sea is indeed a medium rich in energy flows that can be exploited, among them osmotic energy. The latter is based on the natural phenomenon of osmosis, the flow of a weakly concentrated liquid towards a more concentrated one through a semipermeable membrane.
Applied to pressurized sea water, this principle can be used to produce energy. When fresh water is separated from pressurized sea water by a semipermeable membrane, it naturally passes into the sea-water compartment and increases the pressure there, which can then be used to turn a turbine and produce electricity.
After more than 20 years of research on membranes and a first pilot site in Norway, the technology, named Pressure Retarded Osmosis (PRO), will finally be tested at full scale in a forthcoming power plant.
Located at an estuary, the plant will be able to draw on both sea water and fresh water. Two flows, one of filtered and pressurized sea water (11-15 bar) and one of fresh water taken from the river and filtered, will be introduced into the modules containing the membrane.
80 to 90% of the fresh water will pass into the salt-water compartment, increasing the pressure and flow of the sea water. About a third of this water will drive the turbine; the remaining two thirds will return to the pressure exchanger to pressurize the incoming sea water. The brackish water will be discharged back into the estuary.
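The driving potential behind PRO can be estimated with the van 't Hoff relation, under the dilute-solution approximation and with sea water modelled crudely as 0.6 mol/L of NaCl (both are simplifying assumptions, not figures from the plant):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def osmotic_pressure_bar(molarity_mol_L, ions_per_formula, temp_K=293.0):
    """van 't Hoff estimate pi = i*c*R*T (dilute-solution approximation)."""
    c = molarity_mol_L * 1000.0                       # mol/L -> mol/m^3
    return ions_per_formula * c * R * temp_K / 1e5    # Pa -> bar

# Sea water ~ 0.6 mol/L NaCl, which dissociates into 2 ions per formula unit
pi = osmotic_pressure_bar(0.6, 2)
print(round(pi, 1))   # ~29 bar
```

The result, roughly 29 bar, is about twice the 11-15 bar working pressure quoted above, consistent with PRO practice of operating near half the osmotic pressure to maximize power output.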
The company has developed a technology of underwater turbines that draws its power from the swell of the waves. Each element is composed of a floating buoy and a double-acting cylinder actuator.
The elements are assembled in strings to increase the pressure, and multiplying the strings increases the aggregate production rate.
Carnegie Wave Energy Limited (ASX: CWE) has concentrated on developing and marketing its CETO wave-energy technology, able to produce energy with zero greenhouse-gas emissions as well as desalinated water.
CETO differs from other wave technologies under development in that it is fully submerged and generates its energy onshore rather than offshore.
Thermoelectric waves propelled by chemical reaction
A team of MIT scientists has discovered a previously unknown phenomenon that allows powerful waves of energy to be transported along carbon nanotubes. This discovery could lead to a new way of producing electricity, the researchers say.
The phenomenon, described as a thermoelectric wave, "opens a new field of research on energy, which is rare," said Michael Strano, senior author of a paper describing the discovery in the journal Nature Materials. The lead author is Wonjoon Choi, a doctoral student in mechanical engineering.
A carbon nanotube can produce a very rapid wave of power when it is coated with a layer of fuel that is then ignited, so that heat travels along the tube. Like flotsam propelled across the ocean surface by travelling waves, it turns out that a thermal wave travelling along a microscopic wire can carry electrons with it, creating an electric current.
The key ingredient of the recipe is the carbon nanotube, a submicroscopic hollow tube made of a lattice of carbon atoms. These tubes, a few nanometres in diameter, belong to a family of carbon molecules that also includes buckminsterfullerene and graphene sheets.
In the experiments, the electrically and thermally conductive nanotubes were coated with a layer of reactive fuel that produces heat as it decomposes. The fuel was then ignited at one end of the nanotube using a laser beam or a high-voltage spark. The result was a thermal wave propagating rapidly along the length of the nanotube. The heat released by the decomposition of the fuel enters the nanotube, where it travels thousands of times faster than in the fuel itself.
As the heat is fed back into the fuel coating, a thermal wave is created and guided along the nanotube. At a temperature of 3,000 kelvins, this ring of heat moves along the nanotube 10,000 times faster than the normal propagation of such a chemical reaction. And the heating produced by this combustion also pushes electrons along the tube, creating a substantial electric current.
Combustion waves, such pulses of heat travelling along the nanotubes, have been studied mathematically for more than 100 years, Strano explains, but he was the first to predict that such waves could be guided by a nanotube or nanowire and that the heat wave could push an electric current along the wire. In the group's initial experiments, the researchers were genuinely surprised by the size of the resulting voltage peak that propagated along the wire.
The amount of power released in the process is much larger than traditional thermoelectric calculations predict, as the experimenters noticed in the initial experiments. After igniting the fuel coating on the carbon nanotubes, the engineers, astonished by the size of the resulting voltage spike, redoubled their efforts to understand and optimize this new phenomenon in detail.
A tidal turbine (hydrolienne) is an underwater turbine (submerged, or floating semi-submerged on the water) that uses the kinetic energy of marine currents or watercourses, just as a wind turbine uses the kinetic energy of the air.
The turbine converts the hydraulic energy into mechanical energy, which is then transformed into electrical energy by an alternator.
The kinetic power of a fluid crossing a disc of section S is:
Pcin = 1/2 × ρ × S × V³ (in W), with:
ρ : density of the fluid (fresh water 1,000 kg/m³, sea water 1,025 kg/m³)
S : section swept by the rotor, in m²
V : speed of the fluid, in m/s.
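The formula can be checked numerically. With the densities given above, the same disc in the same flow speed collects about 830 times more power in sea water than in air (the 1 m² disc and 2 m/s speed are illustrative choices):

```python
def kinetic_power_W(density, area_m2, speed_m_s):
    """Kinetic power of a fluid stream crossing a disc: P = 1/2 * rho * S * V**3."""
    return 0.5 * density * area_m2 * speed_m_s ** 3

# Same 1 m^2 disc: a 2 m/s sea current vs a 2 m/s breeze
print(kinetic_power_W(1025.0, 1.0, 2.0))  # 4100.0 W in sea water
print(kinetic_power_W(1.23, 1.0, 2.0))    # ~4.9 W in air
```

Note that this is the power carried by the flow, not the power a rotor can extract; the Betz limit discussed below caps the extractable fraction.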
Even more strictly than for a wind turbine, the incompressibility of the fluid imposes that the product of the speed V and the section S of the stream tube crossing the disc is constant. Ahead of the disc, the fluid slows down and the stream tube widens. At the disc itself, the change of section is negligible and thus (paradoxically) the speed of the fluid is constant. Behind the disc, the fluid slows down further and the stream tube widens again.
An elementary model of propeller operation, due to Rankine and Froude, makes it possible to evaluate the fraction of the kinetic power recoverable by a disc perpendicular to a moving fluid. This is the Betz limit, equal to 16/27 ≈ 59%. The limit can be exceeded if the fluid stream is constrained in a duct of varying section instead of flowing freely around the propeller.
Compared with a wind turbine, tidal turbines benefit from the density of water, about 830 times higher than that of air (approximately 1.23 kg/m³ at 15°C). Despite a generally lower fluid speed, the recoverable power per unit of rotor area is much larger for a tidal turbine than for a wind turbine.
Tidal turbines are therefore much smaller than wind turbines of the same power, owing to the density of water being roughly 800 times that of air.
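The size difference can be made concrete by inverting the power formula under the Betz coefficient 16/27 introduced above; the speeds chosen here (a 3 m/s tidal stream against a 10 m/s wind) are illustrative assumptions:

```python
def area_for_power(target_W, density, speed_m_s, cp=16.0 / 27.0):
    """Swept area needed to extract target_W at power coefficient cp (Betz limit)."""
    return target_W / (0.5 * cp * density * speed_m_s ** 3)

# 100 kW from a 3 m/s tidal stream vs a 10 m/s wind, both Betz-limited
print(round(area_for_power(100e3, 1025.0, 3.0), 1))  # ~12.2 m^2 rotor in water
print(round(area_for_power(100e3, 1.23, 10.0), 1))   # ~274 m^2 rotor in air
```

Even though the wind here is more than three times faster, the water rotor can be over twenty times smaller in swept area for the same power.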
Marine currents are predictable (in particular from tide tables), so the electrical production can be estimated with precision.
The potential of marine currents is very large.
A tidal turbine uses a renewable energy source (the marine current) and does not pollute, in terms of combustion waste such as CO₂ or radioactive waste.
New semi-submerged models of tidal turbine can be adapted to rivers, even modest ones, without the ecological impacts of traditional turbines, whose effects on fish fishermen fear are underestimated. These turbines produce less electricity than traditional ones, but could be much lighter and require much less investment.
Tidal turbines create areas of turbulence, which modify sedimentation and the current, with possible effects on the flora and fauna immediately downstream of their position. These aspects are analyzed in impact studies.
Fish or marine mammals can collide with the propellers. These can nevertheless turn very slowly (depending on the resistance opposed by the alternator, and thus on the turbine model).
In turbid water, the presence of suspended sand causes very strong erosion of the propeller blades and moving parts, so maintenance must be very frequent. It is also more difficult than in open air, since the machine cannot be opened without water penetrating inside and damaging all the mechanical and electrical systems. For this reason, some tidal turbines have a structure emerging from the water, which can be awkward for navigation. Ballast systems could make it possible to raise or lower the production units.
To prevent the growth of algae and encrusting organisms on the turbine, an antifouling coating must be used. It must be renewed regularly, which entails significant maintenance costs (intervention at sea, etc.).
Maintenance and installation are very expensive.
The potential impacts of these devices are still little known, and worry in particular the fishermen who work in the zones of interest. According to certain assumptions, the turbines would create zones of turbulence, preventing sediment deposits and thus the development of flora, eventually creating a dead zone. These turbulences could also resuspend nutrients and favour the plankton that feeds certain fish. The turbines could also disturb marine animals which, too curious, approach too closely.
Harvesting the energy of the currents slows the fluid in the axis of the turbine, which causes a slight acceleration of the bypass currents. The same phenomenon occurs when water flows past a rock: fish avoid the obstacle by following the faster streamlines or by using the counter-currents of the turbulence.
In addition, the rotation rate of the rotor is limited by the blade-tip speed because of the phenomenon of cavitation.
Thus, large tidal turbines will turn only at 10 to 20 revolutions per minute, and their effects should be limited to turbulence downstream of the turbine. Sediments would not settle around it, which would prevent silting.
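The 10-20 rpm figure relates to cavitation through the blade-tip speed, the fastest-moving point of the rotor. The 16 m rotor diameter below is an assumed, illustrative value, and the actual onset of cavitation depends on immersion depth and blade profile:

```python
import math

def tip_speed_m_s(rpm, rotor_diameter_m):
    """Blade-tip speed: the tip travels pi*D metres per revolution."""
    return math.pi * rotor_diameter_m * rpm / 60.0

# Hypothetical 16 m rotor at the 10-20 rpm cited for large tidal turbines
for rpm in (10, 20):
    print(rpm, round(tip_speed_m_s(rpm, 16.0), 1))  # ~8.4 and ~16.8 m/s
```

Keeping the tip below roughly these speeds is what forces large-diameter tidal rotors to turn so slowly, which in turn is why they are relatively benign for fish.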
Moreover, a sufficiently low rotation speed should not disturb fish.
It should also be considered that the preferred sites for installing tidal turbines are sites with strong to very strong currents (more than 3 m/s), where conditions are unfavourable to the development of a sedentary, fixed fauna and flora.
The European potential of tidal-stream energy is, according to several studies undertaken a few years ago on this worldwide question, about 12.5 GW, which could produce 48 TWh annually, representing the capacity of three recent power plants.
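The two figures quoted imply a capacity factor of roughly 44%, i.e. the 12.5 GW fleet would on average run at a bit under half its rated power:

```python
def capacity_factor(annual_energy_TWh, installed_GW):
    """Fraction of the year the fleet would need to run at full rated power."""
    return annual_energy_TWh * 1e3 / (installed_GW * 8760.0)  # 8760 h per year

print(round(capacity_factor(48.0, 12.5), 3))  # ~0.438
```

A capacity factor of this order is high by wind-power standards and reflects the regularity of tidal currents noted below.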
Marine currents could be exploited everywhere in the world; tidal currents, however, are for the moment the preferred field for this type of technology. Compared with the general circulation currents (such as the Gulf Stream), tidal currents present particularly favourable characteristics:
high intensity (in certain zones tidal currents can reach or exceed 10 knots, i.e. 5 m/s, whereas general currents seldom exceed 2 knots);
proximity to the coast: the veins of intense current appear in shallow zones located near the coast, which facilitates their exploitation;
stable direction: tidal currents are generally alternating, which simplifies the harvesting device;
finally, predictability: tidal currents are perfectly foreseeable, since they depend only on the relative position of the generating bodies (the Moon and the Sun) and on local topography.
The SEAREV is designed to generate electricity from the waves, but it is still a project of the CNRS and the École Centrale de Nantes, born in the mind of the engineer Alain Clément. Its name stands for an autonomous electrical system for recovering energy from waves.
The SEAREV shows no more than a third of its 15 m height above the water, the equivalent of a small five-storey building. At 25 m long, it should weigh at least 1,000 tonnes, half of which is the huge pendulum wheel housed at the bottom. It is through this wheel that the SEAREV would produce about 500 kilowatts of electricity, enough to power 200 homes. One square kilometre of SEAREVs would supply more than 8,000 homes with electricity, the equivalent of a town of 20,000 inhabitants.
Gas kept continuously at high pressure maintains the oil pressure in storage tanks located just below. The hydraulic motor is thus fed continuously by pressurized oil, which ensures smooth operation.
Its operation is computer-controlled. Connected to motion sensors distributed around the SEAREV, the computer detects every oscillation of the float and of the pendulum wheel. It applies the disc brake when the pendulum wheel reaches its highest point in each swing; the wheel is then blocked for a split second before swinging back in the opposite direction. The purpose of the manoeuvre: to maximize the relative rotation between float and pendulum wheel, so as to extract the maximum energy from the undulations of the waves.
Two elements of the SEAREV, the float and the pendulum wheel, are the source of the electricity the machine produces. It is the relative displacement of one with respect to the other, under the pressure of the waves, that turns an alternator.
A wave tips the SEAREV's float.
This movement sets the pendulum wheel rotating. Driven by its weight, it oscillates inside the float. In doing so it operates, through several gears, connecting rods which in turn move two pistons: the left piston rises in its cylinder while the right one descends.
As it rises, the left piston ejects pressurized oil towards the storage tanks and the SEAREV's hydraulic motor, which uses this pressure to turn a shaft at high speed. This metal part drives a generator that produces electricity. After passing through the hydraulic motor, the oil is released into a low-pressure tank for reuse in a new cycle. While its counterpart on the left rises, the right piston descends, freeing space in its cylinder; this movement draws oil from the tank. This oil will be fed back to the motor on the next swing of the pendulum wheel, which is for the moment still: it has been blocked at its maximum height by the computer-controlled disc brake.
The SEAREV is at the top of the wave and the float has righted itself. The computer, using its sensors, detects the movement and releases the brake that was holding the wheel. The wheel then swings in the opposite direction, this time causing the left piston to descend and the right piston to rise.
It is now the right piston that injects pressurized oil into the hydraulic motor, while the left piston fills with oil from the tank. The wheel swings up to its highest point, where it will again be held by the disc brake. It will then swing back the other way with the next wave, and so on.
Using wave power to produce electricity is relatively simple. The Pelamis is installed in the open sea; it is composed of four elements linked to one another, which can move relative to each other both horizontally and vertically. At each oscillation, oil is pressurized into accumulators, which drive a hydraulic motor that in turn drives the generator.
A new technology, named VIVACE, would make it possible to generate renewable energy from rivers and from weak ocean currents. The aim of this novel method is to reproduce the behaviour of fish, by using the turbulence generated by an obstacle.
VIVACE, whose experimental prototype consists of a pipe laid out horizontally in the bed of a river and fixed to the bottom by two vertical tripods, aims to exploit the turbulence naturally created along banks or downstream of bridges.
Any movement of the pipe induced by its meeting with an obstacle produces mechanical energy which can be converted into electricity. Although the efficiency of this conversion has been estimated at 22 percent, it seems that the researchers have not yet pushed the experiment as far as actual electricity production.
VIVACE is thus the first technical system able to generate energy from currents whose speed is around 3.2 km/h. Whereas the majority of marine currents flow at less than 4.8 km/h, the turbines currently used for hydraulic power generation require speeds above 8 km/h.
Where the purpose of previous hydraulics research at the University of Michigan was to prevent the formation of vortices, the engineers of the department of naval architecture and marine engineering are now turning towards understanding and amplifying such phenomena.
Michael Bernitsas, at the origin of VIVACE, thus claims to have found a renewable hydraulic power source whose environmental impact would be tiny, since the system generates no fast movements.
An industrial use of this technology is moreover being considered on the Detroit River. This pilot project, due to proceed within the next 18 months, will consist of installing many cylinders spread at regular intervals on the river bed. According to the researchers' work, the spacing of the prototypes will have to be about four times their diameter to maximize the formation of turbulence.
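VIVACE exploits vortex-induced vibration, whose driving frequency can be estimated from the Strouhal relation f = St·U/D, with St ≈ 0.2 for a circular cylinder over a wide range of Reynolds numbers. The 0.3 m pipe diameter below is an assumed value, not a figure from the project:

```python
def shedding_frequency_Hz(flow_speed_m_s, cylinder_diameter_m, strouhal=0.2):
    """Vortex-shedding frequency f = St * U / D behind a circular cylinder."""
    return strouhal * flow_speed_m_s / cylinder_diameter_m

# A 0.9 m/s current (about 3.2 km/h) past a hypothetical 0.3 m pipe
print(round(shedding_frequency_Hz(0.9, 0.3), 2))  # ~0.6 Hz
```

A shedding frequency well under 1 Hz means slow oscillations, consistent with Bernitsas's claim that the system generates no fast movements.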
BioWave: inspired by sea plants
Inspired by the movement of sea plants under the action of the waves, BioWave makes it possible to recover up to 250 kW of electricity.
BioWave consists of a structure that oscillates under the effect of the waves. It integrates a self-contained module (O-Drive) able to convert the induced kinetic forces into electricity, which is then injected into the network through an underwater cable. The technology is designed to operate at water depths of between 30 and 50 metres.
The BioWave device differs from other technologies of the waves.
Firstly, it is designed to generate grid-compatible electricity and thus needs to be connected to the coast only by an underwater cable. That offers flexibility in siting the power station, as well as access to richer energy resources, while transmitting the power optimally to the onshore high-voltage network.
Secondly, in the event of extreme waves, the device automatically adopts a safe position by lying down on the ocean floor, which reduces structural design requirements and costs without sacrificing reliability.
Thirdly, the design, which uses a patented multi-blade structure, should capture a greater proportion of the available energy than other models.
"We think that BioWave, when it is marketed, will generate electricity at very competitive prices compared with wind energy. It will be closer to the price characteristics of baseload power than to those of solar and wind energy," explained Dr. Finnigan, chairman of BioPower Systems.
Twelve other organizations have begun contributing to the development of BioWave, in order to support the pilot project over the planned four-year period.
BioStream: inspired by the shape of shark tails
BioStream makes it possible to produce electricity from tidal currents. Modelled on fish, its shape enables it to remain continuously aligned with the direction of the current. It is by oscillating that BioStream extracts energy from the movement of the water.
An on-board computer continuously adjusts the angle of the fin. Energy is transferred by this oscillating movement and converted into electricity by the O-Drive modules installed on BioStream.
The company BioPower Systems estimates that, for current speeds of 2.5 m/s, BioStream would be able to produce 250 kW. That makes it as interesting as its sibling, BioWave, although the company has announced that it will be deployed only once the BioWave technology is fully realized.
WaveRoller is a device that transforms ocean waves into energy and electricity. The machine operates in near-shore zones, about 0.3-2 km from the shore, at depths of between 8 and 20 metres. It is completely submerged and anchored to the seabed. A single WaveRoller unit is rated between 500 and 1,000 kW, with a capacity factor of 25 to 50% depending on the wave conditions at the site.
The simple but very strong idea behind the WaveRoller design came as a revelation when the Finnish professional diver Rauno Koivusaari was exploring a wreck. He noticed that a very heavy hatch panel of the ship was moving back and forth, driven by the energy of the water pushed beneath the surface by the waves. Since this first idea, the WaveRoller concept has gone through successive cycles of prototype construction, laboratory testing, highly sophisticated simulations and numerical modelling, before the test device was finally deployed in real maritime conditions, scaled up, and the development cycle repeated.
WaveRoller works basically in the same way as the hatch panel of the wreck that Rauno observed. The back-and-forth flow of water driven by wave motion sets the composite panel swinging. To maximize the energy that the panel can absorb from the waves, the device is installed below the surface at a depth of roughly 8 to 20 metres, where the wave-driven water movement is most powerful. The panel covers almost the whole water column from the bottom without breaking the surface, which keeps it out of the seascape and avoids adding material that would weigh down the structure.
When the WaveRoller panel moves and absorbs the energy of the waves, hydraulic piston pumps attached to the panel pump hydraulic fluid inside a closed hydraulic circuit. All the elements of the hydraulic circuit are enclosed in a hermetic structure inside the device and are not in contact with the marine environment; consequently, there is no risk of leakage into the ocean. The high-pressure fluid is fed into a hydraulic motor that drives an electric generator. The electrical output of this wave-energy power plant is then conveyed to the electricity distribution network via an underwater cable.
The energy production of a single WaveRoller, in other words the output of a single panel, varies between 500 and 1,000 kW. The differences in output are due to the local power of the waves.
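Combining the rated power with the 25-50% capacity factor quoted above gives the annual energy of one unit; pairing the low rating with the low factor (and the high with the high) is an illustrative assumption:

```python
def annual_energy_MWh(rated_kW, capacity_factor):
    """Yearly output of one unit at a given capacity factor (8760 h/year)."""
    return rated_kW * capacity_factor * 8760.0 / 1000.0

# The cited extremes: 500 kW at 25% and 1000 kW at 50%
print(annual_energy_MWh(500, 0.25))   # 1095.0 MWh
print(annual_energy_MWh(1000, 0.50))  # 4380.0 MWh
```

So one panel would deliver on the order of 1 to 4.4 GWh per year depending on the site.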
One of the unique characteristics of WaveRoller, which guarantees its cost-effectiveness while delivering reliable energy efficiency, is its distinctive operation and maintenance concept. WaveRoller units are built with large ballast tanks which, filled with air, allow the unit to be floated to its operating site; the tanks can then be filled with water to submerge the base. Although the wave-energy converter remains entirely submerged during regular operation, it can easily be brought back to the surface for maintenance by emptying the ballast tanks. There is thus no need for complex, expensive and potentially dangerous diving operations to maintain a WaveRoller. Moreover, the device can be installed and commissioned without costly additional equipment such as large cranes or jack-up platforms.
The Wavestar system is a wave power station that recovers the energy of the waves. Made up of floats connected to a platform by mechanical arms, the device is somewhat reminiscent of a millipede. The upward oscillation produced by the swell is transformed into electrical energy: the movement of the floats is transferred via a hydraulic system that drives the rotation of a generator, producing electricity. The glass-fibre floats and metal arms are retractable during large storms, and all the electrical connections and machinery are kept dry inside the platform.
The thermal energy of the seas (ETM)
The thermal energy of the seas (ETM), or marine thermal energy, is produced by exploiting the temperature difference between the surface waters and the deep water of the oceans. A frequently encountered acronym is OTEC, for Ocean Thermal Energy Conversion.
Because of the surface area they occupy, the seas and oceans of the Earth behave like a gigantic collector for:
solar radiation (direct: the solar flux absorbed by the ocean; or indirect: the Earth's radiation reflected by the atmosphere);
the energy of the wind (itself derived from solar energy).
Although part of this energy is dissipated (currents, swell, friction, etc.), a large part heats the surface layers of the ocean. Thus at the surface, thanks to solar energy, the water temperature is high (it can exceed 25°C in the intertropical zone), while at depth, deprived of solar radiation, the water is cold (around 2 to 4°C, except in enclosed seas, such as the Mediterranean, whose floor cannot be "carpeted" by the cold polar waters that sink, in the north and south of the Atlantic Ocean, with an average total flow of 25 million m³/second).
This temperature difference can be exploited by a heat engine. Since the engine needs a cold source and a hot source to produce energy, it uses water from the depths and surface water respectively as those sources.
Techniques of the E.T.M.
An E.T.M. plant produces energy by means of a working fluid (sea water, ammonia [NH3] or another fluid whose boiling point is close to 4°C). This fluid passes from the liquid state to the vapour state in the evaporator, in contact with warm water drawn from the surface. The pressure produced by the vapour drives a turbogenerator, turning a turbine and producing electricity; after the gas has lost its pressure, it passes into a condenser, returning to the liquid state in contact with cool water drawn from the depths.
The E.T.M. needs a great deal of water: a very large sea-water flow is required to compensate for the low efficiency caused by the small temperature difference, and pipes of very large diameter are required to limit pressure losses. Currently it is possible to use HDPE (high-density polyethylene) pipes 1.5 metres in diameter, but in the future, if large power stations are built, pipes 15 metres in diameter will be needed.
The E.T.M. operates with a temperature differential of about 20°C. The higher the temperature differential, the higher the production; by going deeper, one draws colder water and the production per unit volume increases.
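The low efficiency implied by a differential of about 20°C follows from the Carnot bound, which no real cycle can exceed. With the 26°C surface and 5°C deep-water temperatures used in the cycles described below, the theoretical ceiling is only about 7%, and real plants stay well below it:

```python
def carnot_efficiency(t_hot_C, t_cold_C):
    """Carnot upper bound 1 - Tc/Th, with temperatures converted to kelvins."""
    return 1.0 - (t_cold_C + 273.15) / (t_hot_C + 273.15)

# Surface water at 26 C against deep water at 5 C
print(round(carnot_efficiency(26.0, 5.0), 3))  # ~0.07, i.e. about 7%
```

This is why enormous water flows are needed: only a few percent of the heat moved through the plant can ever become electricity.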
To date, there exist three types of power stations E.T.M.:
The cycle starts with the pumping of surface sea water, at around 26°C. It is introduced into an evaporator held under vacuum to promote evaporation, because under negative relative pressure evaporation occurs at a lower temperature and the vapour is free of salt; however, of the water flow crossing the evaporator, only 0.5% becomes steam, and the rest of the water is returned to the sea at 21°C. The low pressure generated by the vapour is enough to drive a turbogenerator, which produces electricity. The vapour is then transferred to a double-walled condenser where cool water, pumped from the depths at about 5°C, condenses it into fresh water that can be used for consumption.
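The quoted 0.5% steam fraction can be cross-checked with a rough energy balance: the sensible heat given up by the warm water as it cools from 26°C to 21°C supplies the latent heat of the steam produced. The heat capacity and latent-heat values below are standard round figures, and the estimate ignores losses, which is why it lands slightly above the quoted 0.5%:

```python
def flash_fraction(dT_K, cp_kJ_kgK=4.18, h_fg_kJ_kg=2450.0):
    """Mass fraction flashed to steam: sensible heat released / latent heat."""
    return cp_kJ_kgK * dT_K / h_fg_kJ_kg

# Warm water enters at 26 C and leaves at 21 C: 5 K of sensible heat
print(round(flash_fraction(5.0) * 100, 2))  # ~0.85 % steam by mass
```

Either way, only a fraction of a percent of the pumped water becomes steam, confirming the need for very large intake flows.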
The closed cycle uses the same hardware as a heat pump (evaporator, condenser), but whereas a heat pump produces thermal energy from electrical energy, the closed cycle of an E.T.M. power station uses the opposite process: starting from thermal energy, it produces electrical energy. Warm surface water at 26°C is fed into a double-walled evaporator, with water on one side and ammonia (NH3) on the other: the water gives up its heat to the ammonia, which evaporates, since ammonia has a lower boiling point than water. The water leaving the evaporator returns to the sea at 23°C. The evaporated ammonia passes through a turbogenerator to produce electricity, then into a double-walled condenser, where it gives up its heat to cool water drawn at depth at 5°C, which returns to the sea at 9°C. Once condensed, the ammonia is returned to the evaporator by a circulation pump, and the cycle repeats.
The ammonia (NH3) thermodynamic cycle
The thermodynamic cycle chains several successive transformations, which together form a cycle. In all there are four:
Between 1 and 2: an adiabatic compression in the pump
Between 2 and 3: an isobaric heating in the evaporator
Between 3 and 4: an adiabatic expansion in the turbogenerator
Between 4 and 1: an isobaric cooling in the condenser
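These four transformations can be checked against the first law of thermodynamics. In the sketch below, the enthalpy changes (kJ/kg of ammonia) are invented placeholders, chosen only to illustrate the bookkeeping, not source data:

```python
# First-law balance of the four-step closed ammonia cycle described above.
# All numbers are illustrative placeholder enthalpy changes in kJ/kg.
w_pump = 2.0           # 1->2 adiabatic compression: work put in by the pump
q_evaporator = 1200.0  # 2->3 isobaric heating: heat taken from warm sea water
w_turbine = 90.0       # 3->4 adiabatic expansion: work given to the turbogenerator
# 4->1 isobaric cooling: heat rejected to the cold deep water must close the balance
q_condenser = q_evaporator + w_pump - w_turbine

net_work = w_turbine - w_pump
efficiency = net_work / q_evaporator

print(f"Heat rejected to deep water: {q_condenser} kJ/kg")
print(f"Net work: {net_work} kJ/kg, thermal efficiency: {efficiency:.1%}")
```

With these placeholder numbers the cycle efficiency comes out around 7%, the same order as the Carnot limit for the quoted temperatures.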
The hybrid cycle combines the two preceding techniques. It keeps the closed cycle as its first stage, with ammonia crossing the evaporator, the turbogenerator and the condenser, i.e. a thermodynamic cycle that produces electricity. The novelty is to add a second stage that produces drinking water, using an open cycle fed by the temperature differential remaining after the closed cycle.
Positive and negative points of the cycles
Remarks on the open cycle:
Production of drinking water in addition to electricity
Fewer walls in the evaporator, hence fewer biofouling problems
Large turbine required by the low pressure, making the process very expensive
Difficulty of creating and maintaining the vacuum
Remarks on the closed cycle:
Small turbogenerator thanks to the high pressure, therefore less expensive
Bulky, double-walled evaporator, therefore more biofouling problems
The use of ammonia poses materials problems
Remarks on the hybrid cycle:
Produces two outputs (electricity and drinking water) in large quantity
Higher capital cost, because twice as much hardware
Greater cooling of surface water
The fuel cell
A fuel cell is an electrochemical energy generator that converts the chemical energy of a fuel (hydrogen, hydrocarbons, alcohols) directly into electrical energy, without passing through thermal energy.
Principle of operation
The fuel cell operates as the reverse of the electrolysis of water. Here, the voltage source is removed; the device is fed with hydrogen and oxygen, and an electric voltage appears between the two electrodes: the device has become an electric generator that will operate as long as it is fed. It consists of two electrodes (anode and cathode) separated by an electrolyte, a material that blocks the passage of electrons but lets ions circulate. The hydrogen-containing fuel (H2) is brought to the anode, where H2 is transformed into H+ ions and releases electrons, which are collected by the anode. The H+ ions reach the cathode, where they combine with oxygen ions formed from the oxygen in air to produce water. It is this transfer of H+ ions and electrons toward the cathode that produces a direct electric current from hydrogen. However, the voltage does not exceed 0.7 V per cell, so a large number of cells must be connected in series to obtain the required voltage. Since the current produced by the stack is direct, an inverter is often placed downstream of it to convert the DC current into alternating current, in particular when the installation supplies domestic power. The reaction is triggered by a catalyst, generally a thin layer of platinum deposited on the electrodes (anode and cathode). One of the critical points in building the stack is controlling, in an optimal way, the supply and the evacuation of the compounds in each cell: generally hydrogen and air on the inlet side, and water on the outlet side.
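The series-stacking rule quoted above (about 0.7 V per cell) can be sketched as follows; the 48 V target and the 100 A operating current are illustrative assumptions of mine:

```python
import math

CELL_VOLTAGE = 0.7  # V, typical operating voltage of a single cell (from the text)

def cells_needed(target_voltage):
    """Number of cells to connect in series to reach the target voltage."""
    return math.ceil(target_voltage / CELL_VOLTAGE)

def stack_power(n_cells, current_a):
    """Electrical power (W) of a stack of n cells all carrying the same current."""
    return n_cells * CELL_VOLTAGE * current_a

n = cells_needed(48.0)       # e.g. a 48 V bus (assumed target)
print(n, "cells in series")  # 69 cells
print(round(stack_power(n, 100.0), 1), "W at 100 A")  # 4830.0 W
```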
A fuel cell needs to be surrounded by components and subsystems to become a generator of electricity. It needs:
an air compressor
a cooling subsystem
a control system with its sensors and valves
The fuel and its storage
The simplest fuel to use is hydrogen. It is also the one that yields the highest current densities. Its combustion produces only water, as liquid or vapor. It is a constituent of hydrocarbons, and it is abundant. However, it is flammable in air or in the presence of oxygen. Moreover, being colorless and odorless, it is a gas to be handled with precaution. Another disadvantage: it takes up a lot of space, which is problematic for cells intended to equip vehicles. Research on fuel cells therefore also concerns hydrogen storage tanks, which are wanted safer, lighter and more compact. One solution is to use a hydrocarbon or an alcohol such as methanol instead.
The efficiency of a fuel cell varies with the type of cell and can exceed 50%. By comparison, the efficiency of an internal combustion engine averages 15%. Moreover, the energy not converted into electricity is released as steam, i.e. heat, which can be used for cogeneration.
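The efficiency figures quoted above translate into the following energy split (a sketch; the 100 kWh fuel input is an arbitrary illustration):

```python
# Split a given fuel energy into electricity and heat, using the efficiency
# figures quoted in the text (>50% fuel cell, ~15% internal combustion engine).
def split(fuel_energy_kwh, electric_efficiency):
    electricity = fuel_energy_kwh * electric_efficiency
    heat = fuel_energy_kwh - electricity  # remainder is released as heat
    return electricity, heat

fc_elec, fc_heat = split(100.0, 0.50)    # fuel cell: heat is usable for cogeneration
ice_elec, ice_heat = split(100.0, 0.15)  # internal combustion engine
print(f"Fuel cell: {fc_elec} kWh electric, {fc_heat} kWh heat")
print(f"ICE:       {ice_elec} kWh electric, {ice_heat} kWh heat")
```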
Gas/liquid at the anode, gas at the cathode; 10 to 100 kW; 60°C to 90°C; efficiency: stack 60-70%, system 62%
DBFC - Direct borohydride fuel cell: proton or anion membrane; 250 mW/cm²; 20°C to 80°C
PEMFC - Proton-exchange membrane fuel cell: 0.1 to 500 kW; 60°C to 100°C; efficiency: stack 50-70%, system 30-50%; applications: portable, transport, stationary
DMFC - Direct methanol fuel cell: mW to 100 kW; 90°C to 120°C
DEFC - Direct ethanol fuel cell: 20°C to 90°C
FAFC - Formic acid fuel cell: 90°C to 120°C
PAFC - Phosphoric acid fuel cell: up to 10 MW; efficiency: stack 55%, system 40%
MCFC - Molten carbonate fuel cell: alkali-metal carbonate electrolyte; fuels: dihydrogen, methane, syngas; up to 100 MW; efficiency: stack 55%, system 47%
PCFC - Protonic ceramic fuel cell
SOFC - Solid oxide fuel cell: fuels: dihydrogen, methane, syngas; up to 100 MW; 800°C to 1050°C; efficiency: stack 60-65%, system 55-60%
Geothermal power stations
Geothermal power stations exploit natural heat sources, as in Iceland: steam naturally produced in the depths of the ground, at temperatures of 150°C or more, is channeled through pipes and conveyed to turbines that drive generators.
Thermal power stations
The principle is simple: a boiler, which can run on wood, coal, gas, fuel oil or oil, heats a water tank and turns the water into steam; once under pressure, the steam drives the rotation of a turbine, which via a drive shaft turns the generator.
Nuclear reactor generations
Nuclear reactor technologies are classified by generation. This classification was created in 2001, at the launch of the Generation IV International Forum. The chronology of the generations corresponds to the maturity date of the associated technologies, allowing deployment on an industrial scale.
Four generations of reactors are distinguished in this way:
The first gathers the reactors built before 1970.
The second designates the reactors built between 1970 and 1998.
The third is that of the reactors derived from the preceding ones and designed to replace them from 2010-2020.
The fourth designates the reactors still at the design stage, belonging to the 6 design lines defined by the Generation IV International Forum, which could enter service by 2030.
There exists a special category of fourth-generation nuclear reactors, simpler than those described by the GIF, capable of transmuting the nuclear waste of power plants. By stepping up development work on this special category, industry should be able to have them ready by 2020 at the latest.
Description of the generations
Generation I designates the first reactors, built before 1970
Magnox (Great Britain)
UNGG, HWGCR, ChoozA, PWR
Generation II designates the industrial reactors built between 1970 and 1998 and currently in service,
primarily of the pressurized-water-reactor line.
The principal types of nuclear reactors currently built in the world are 2nd-generation reactors:
AGR: advanced gas-cooled reactor
RBMK: boiling water reactor, graphite-moderated, of Soviet design
REB: boiling water reactor (BWR)
PHWR: Pressurized heavy-water reactor
REP or PWR: Pressurized water reactor
WWER: Pressurized water reactor of Soviet design
CANDU: natural-uranium, heavy-water reactor designed in Canada
Generation III: improvements on the second generation
Generation III designates the reactors designed from the 1990s onward, which thus take into account the operating experience of the preceding generations.
Generation III+: the reactors known as generation III+ are an evolution of the 3rd generation; these reactors would be put into operation from the 2010s, before the potential arrival of those studied for Generation IV.
Generation IV: closing the technological cycle
Generation IV designates the six design lines under study, as of early 2011, within the Generation IV International Forum, whose reactors could enter service by 2030
The planned Generation IV nuclear reactors are:
very-high-temperature reactor (VHTR)
supercritical-water reactor (SCWR)
gas-cooled fast reactor (GFR)
sodium-cooled fast reactor (SFR)
lead-cooled fast reactor (LFR)
molten salt reactor (MSR).
Moreover, there are subcritical reactor projects (accelerator-driven hybrid nuclear reactors, or Rubbiatron), possibly dedicated to transmutation.
A posteriori, the Phénix and Superphénix reactors can be classed as prototypes of Generation IV reactors. ASTRID, their successor, a new 600 MWe prototype of the CEA, should be brought into service around 2020
Nuclear fission of uranium-235.
A nuclear plant is an industrial site that uses the fission of atomic nuclei to produce heat, part of which is transformed into electricity (between 30% and 40%, depending on the temperature difference between the cold and hot sources). It is the principal civil implementation of nuclear energy.
A nuclear plant consists of one or more nuclear reactors, whose electric output ranges from a few megawatts to more than 1500 megawatts.
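A rough sketch of the thermal-to-electric conversion implied by the figures above; the 4300 MW thermal rating is an assumed illustrative value, not source data:

```python
# Convert thermal power to electric power using the 30-40% efficiency range
# quoted in the text; both input values below are illustrative assumptions.
def electric_power(thermal_mw, efficiency):
    return thermal_mw * efficiency

# A large reactor of roughly 4300 MW thermal at 35% efficiency:
print(round(electric_power(4300.0, 0.35)), "MWe")  # 1505 MWe, i.e. the ~1500 MWe class
```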
In 1951, the first nuclear plant entered service in the United States. On June 27, 1954, a civil nuclear plant was connected to the electrical grid at Obninsk in the Soviet Union, with an electrical output of five megawatts. The following nuclear plants were those of Marcoule in Provence, on January 7, 1956, Sellafield in the United Kingdom, connected to the grid in 1956, and the Shippingport reactor in the United States, connected in 1957. That same year, construction work began on the first civil-use reactor in France (EDF1) at the Chinon nuclear plant.
World nuclear capacity rose quickly, from more than 1 gigawatt (GW) in 1960 to 100 GW at the end of the 1970s and 300 GW at the end of the 1980s. Since then, world capacity has grown much more slowly, reaching 366 GW in 2005, largely because of the Chinese nuclear program. Between 1970 and 1990, more than 5 GW were built per year (with a peak of 33 GW in 1984). More than two thirds of the nuclear plants ordered after January 1970 were eventually cancelled.
Rising economic costs, due to increasingly long construction times, and the low cost of fossil fuels made nuclear power less competitive in the 1980s and 1990s. In addition, in certain countries, public opinion, worried about the risk of nuclear accidents and the problem of radioactive waste, led to giving up nuclear energy.
The reactor core contains the sealed fuel assemblies and the control rods, immersed in liquid water (REP type).
A nuclear plant gathers all the installations allowing electricity production on a given site. It frequently comprises several units, identical or not; each unit corresponds to a group of installations designed to provide a given electric output (for example 900 MWe, 1300 MWe or 1450 MWe). In France, a unit generally comprises:
the reactor building, generally a sealed double containment, which houses the nuclear reactor; the pressurizer, whose function is to keep the (treated) water of the primary circuit in the liquid state; the steam generators (three or four, depending on the generation); the primary pump sets used to circulate the coolant (water); the primary circuit, whose major role is to transfer heat between the reactor core and the steam generators; and part of the secondary circuit
the fuel building: attached to the reactor building, it is used to store nuclear fuel assemblies before use and during unit outages, and to cool spent fuel (a third of the fuel is replaced every 12 to 18 months). The fuel is kept immersed in pools whose water serves as a radiological shield
the machine-room building, which mainly contains:
a shaft line comprising the various stages of the steam turbine and the alternator (turbine generator set)
the condenser, followed by the feedwater turbopumps (normal operation and backup)
peripheral operating buildings (control room)
additional buildings containing, in particular, the various auxiliary circuits needed for operating and maintaining the nuclear reactor, the electrical supplies of all the auxiliaries, and the backup diesel generator sets
a pumping station for units cooled by sea or river water, and possibly a cooling tower (the most visible part of a nuclear plant: 28 m high at the Chinon CNPE, up to 178 m at the Civaux CNPE)
The other installations of the power plant include:
one or more electric substations connecting it to the grid via one or several high-voltage lines, as well as a limited interconnection between units
technical and administrative buildings, and a general store
A nuclear plant operates like a boiler: a fuel (here, nuclear) is used to create heat. Through an exchanger, this heat transforms water into steam, which, once accelerated, mechanically drives a turbine. The turbine in turn drives an alternator, which produces electricity. The heat is produced in the reactor; the exchanger is called the steam generator.
In a nuclear unit, the nuclear reactor sits upstream of a thermal installation producing steam, which is converted into mechanical energy by a steam turbine; the alternator then uses this mechanical energy to produce electricity.
The essential difference between a nuclear plant and a conventional thermal plant is the replacement of a set of fossil-fuel-burning boilers by nuclear reactors.
To recover mechanical energy from heat, a thermodynamic circuit is needed: a hot source, a circulation loop, and a cold source. In a nuclear plant, the circulation is forced (pumps are used). To simplify:
for a REP-type reactor (pressurized water reactor), the hot source is provided by the water of the primary circuit, at an average temperature of 306°C (286°C at the reactor inlet and 323°C at the outlet, figures that vary with the power of the unit)
the cold source of the cooling circuit can be provided by pumped sea or river water (sometimes supplemented by a cooling tower).
Thus, a REP-type nuclear unit comprises three independent main water circuits, detailed below.
The primary circuit is located in a containment. It consists of the reactor (with its control-rod clusters and fuel) and, depending on the type of unit, 3 or 4 steam generators, each associated with a centrifugal primary pump (one per steam generator, weighing about 90 t), and a pressurizer (including heating elements) that maintains the circuit pressure at 155 bar. It carries, in a closed loop, pressurized liquid water that extracts heat from the fuel and transports it to the steam generators (its coolant role). The water of the primary circuit also moderates the neutrons (its moderator role) produced by nuclear fission.
Thermalization slows the neutrons down so that they can interact with uranium-235 atoms and trigger the fission of their nuclei. Water also gives the reactor a stabilizing behavior: if the reaction ran away, the temperature of the fuel and the water would rise. This would cause, on the one hand, increased neutron absorption by the fuel (the fuel effect) and, on the other, reduced moderation by the water (the moderator effect). The combination of these two effects is called the power effect: an increase in power tends to smother the reaction by itself, a self-stabilizing effect.
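The self-stabilizing power effect described above can be caricatured with a toy feedback loop; the coefficients and the crude power-temperature coupling are invented for illustration and are in no way a reactor-physics model:

```python
# Toy illustration of a negative temperature feedback: power above its set
# point heats the coolant, the hotter coolant reduces reactivity, and power
# is pulled back toward the set point. All numbers are invented.
ALPHA = -0.02   # net power coefficient per degree; negative => stabilizing
T_REF = 306.0   # reference average coolant temperature, deg C (quoted above)

def step(power, temperature):
    """One crude time step: feedback scales power, power drives temperature."""
    reactivity = ALPHA * (temperature - T_REF)
    power = power * (1.0 + reactivity)          # negative feedback damps power
    temperature = T_REF + (power - 1.0) * 50.0  # hotter when power exceeds 1.0
    return power, temperature

p, t = 1.10, T_REF + 5.0  # start from a 10% power excursion, coolant +5 degC
for _ in range(40):
    p, t = step(p, t)
print(round(p, 3))  # power has relaxed back toward 1.0 (the set point)
```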
The secondary circuit breaks down into two parts:
between the condenser and the steam generators, the water remains liquid: this is the feedwater supply of the steam generators; feedwater turbopumps raise the pressure of this water, and heat exchangers raise its temperature (to 60 bar and 220°C)
this water vaporizes in the 3 or 4 steam generators (depending on the type of unit, 900 or 1300/1450 MW), and steam pipes feed, in succession, the stages of the turbine arranged on a single shaft line. The steam acquires a high speed during its expansion, enabling it to drive the bladed wheels of the turbine.
The turbine is made up of several separate stages, each comprising many wheels of different diameters. First, the steam undergoes a first expansion in a high-pressure body (HP, from 55 to 11 bar); it is then recovered, dried and superheated to undergo a second expansion in the three low-pressure bodies (LP, from 11 to 0.05 bar).
The LP bodies serve to increase the efficiency of the thermohydraulic cycle. The outlet of the last turbine stage opens directly onto the condenser, a heat exchanger whose pressure is kept at about 50 mbar absolute (a vacuum) by the temperature of the cooling-circuit water (following the water/steam saturation curve). Vacuum pumps extract the non-condensable gases from the gas phase of the mixture (mainly dioxygen and dinitrogen). The condensate formed in this apparatus is re-used to feed the steam generators.
The half-open cooling circuit
This circuit cools the condenser. The cooling water exchanges heat directly with the sea or a river, via circulating pumps. In the river case, the water can be cooled by a draft of air in a cooling tower, from which a small fraction of the flow (1.5%, i.e. 0.5 m³/s for a 900 MW unit) escapes as vapor in the form of a white plume.
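The quoted figures let us back out the total circulating flow: if 1.5% of the cooling flow evaporates and that fraction amounts to 0.5 m³/s for a 900 MW unit, the total follows directly (a sketch):

```python
# Back out the total cooling flow from the two figures quoted in the text.
EVAPORATED_FRACTION = 0.015  # 1.5% of the flow escapes as vapor
evaporated_m3_s = 0.5        # m^3/s, quoted for a 900 MW unit

total_flow = evaporated_m3_s / EVAPORATED_FRACTION
print(round(total_flow, 1), "m^3/s of total circulating cooling water")  # 33.3 m^3/s
```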
The mechanical energy produced by the turbine drives the alternator (a rotor weighing about 150 t), which converts it into electrical energy, conveyed by the transmission grid.
When the nuclear unit delivers electric power to the grid, it is said to be coupled to the grid. An untimely disconnection of the alternator from the grid (called a load rejection) requires an immediate reduction of the steam supply to the turbine, via control valves on the steam pipes; otherwise its rotation speed would increase until it was destroyed by the excessive torque then exerted on the blades. Nevertheless, in this case the unit remains in service at low power: the turbine keeps turning and remains ready for immediate re-coupling to the grid (the unit is then islanded: it powers its own auxiliaries).
The normal operation of a nuclear plant is described as critical (as opposed to subcritical and supercritical).
Reliability of a nuclear plant
The major accident examined by safety studies is core meltdown.
For the French nuclear plants of the first generation, the objective was a probability of core meltdown below 5 in 100,000 per reactor per year. This safety level was improved in the second generation. The figures for the German power stations are comparable. This level of safety was somewhat higher than that observed in the rest of the world: by early 2009, the nuclear industry had accumulated a total of 13,000 reactor-years of operating experience.
A major catastrophe occurred at one of these first-generation power stations, of the RBMK type (the only accident to have reached level 7 on the INES scale): the explosion of the Chernobyl plant in 1986. Two other accidents led to the destruction of a reactor core: the Sellafield fire of 1957, and the Three Mile Island accident in 1979.
Nuclear safety studies were systematized after these accidents; in France they are overseen by the nuclear safety authority (ASN), now independent of the executive branch, assisted by a technical body, the IRSN. The second-generation power stations in France have a safety objective fifty times higher, of about one accident per million years of operation.
At this level of safety, with a world fleet twenty times larger than at present (about 500 reactors), the level of risk would be lower than one accident per millennium. Moreover, the design of these modern power stations must show that a core-meltdown accident, if one occurs, remains confined within the power station itself and does not lead to contamination of the population.
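The probabilities quoted in this section can be checked with a back-of-envelope calculation (a sketch; the 500-reactor fleet is the hypothetical figure given above):

```python
# First-generation objective: below 5 core melts per 100,000 reactor-years.
p_gen1 = 5.0 / 100_000
experience_reactor_years = 13_000  # accumulated worldwide experience, early 2009

expected_melts = p_gen1 * experience_reactor_years
print(round(expected_melts, 2), "expected events over the accumulated experience")

# Second-generation objective: "fifty times higher" safety, i.e. roughly one
# accident per million reactor-years, applied to a hypothetical 500-reactor fleet.
p_gen2 = 1.0 / 1_000_000
fleet = 500
events_per_year = p_gen2 * fleet
print(round(1.0 / events_per_year), "years between accidents on average")
```

The second figure, one accident every 2000 years on average, is consistent with the "less than one accident per millennium" claim above.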
The design of fourth-generation nuclear plants is the subject of international coordination, which includes safety studies, and must rest on intrinsically safe designs.
Risk of exposure to ionizing radiation
In December 2007, the results of the study of the German national registry of childhood cancers were made public by its director, Maria Blettner: the study indicates that in Germany a relation is observed between the proximity of a dwelling to the nearest nuclear plant and the likelihood for children of developing a cancer or a leukemia before the age of 5. For all that, ionizing radiation cannot theoretically be interpreted as the cause, since exposure to ionizing radiation was neither measured nor modelled.
Efficiency of a nuclear plant
The theoretical conversion efficiency of a nuclear plant's installations is about 33%, to which transmission losses on the very-high-voltage grid must then be added.
A nuclear power reactor cannot be used for cogeneration. That would amount to raising the temperature of the cold source and thus decreasing the temperature difference between the sources, with a drop in electricity output as a consequence. In a cogeneration thermal plant, by contrast, the exhaust fumes are used to produce steam for district heating.
Various types of reactors
A nuclear plant is equipped with one or more nuclear reactors. A nuclear reactor can belong to various design lines:
boiling water reactor, graphite-moderated, of Soviet design (RBMK)
natural-uranium-fueled reactor, graphite-moderated, cooled by carbon dioxide (the natural uranium graphite gas line, or UNGG), including the first civil-use reactor in France (EDF1). This line was abandoned in favor of the REP line for economic reasons. The French power stations of this type are now all shut down; certain British power stations of the same type (Magnox), on the other hand, are still in service
reactor using natural uranium moderated by heavy water (the Canadian CANDU line)
pressurized water reactor (REP, English PWR): this type of reactor uses enriched uranium oxide as fuel and is moderated and cooled by ordinary pressurized water. REPs make up the bulk of the current fleet: 60% worldwide and 80% in Europe. A variant is the pressurized water reactor of Soviet design (WWER)
boiling water reactor (REB, English BWR): this type of reactor is rather similar to a pressurized water reactor, with the important difference that the primary water vaporizes in the reactor core in normal operation
pressurized heavy-water reactor (PHWR)
advanced gas-cooled reactor (AGR)
fast-neutron reactor with sodium coolant, like the European Superphénix or the Russian BN-600
molten salt reactor
liquid uranyl-nitrate reactor
Inertial fusion power station
An inertial fusion power station is intended to produce electricity industrially from fusion energy, using inertial-confinement techniques. This type of power station is still at the research stage.
It is often considered that the only fusion process likely to lead, in the medium term (a few decades from now), to civil energy production is the tokamak line using magnetic confinement, represented by the ITER project. However, recent studies suggest that, in parallel with the tokamak line, a second production line could be set up using such inertial fusion power stations.
The nuclear fusion of deuterium and tritium.
Contrary to fission, in which the nuclei of heavy atoms are split to form lighter nuclei, fusion occurs when two nuclei of light atoms join to form a heavier nucleus. In both cases, the total mass of the product nuclei is lower than the original mass, and the difference is transformed into energy according to Einstein's famous formula E = mc² (where E is the energy produced, m the mass that disappeared, and c the speed of light in vacuum).
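The formula E = mc² can be made concrete for the deuterium-tritium reaction D + T → He-4 + n; the isotope masses and physical constants below are standard values supplied here, not taken from the source:

```python
# Worked E = m c^2 example for the D + T -> He-4 + n fusion reaction.
U_TO_KG = 1.66053906660e-27    # kg per atomic mass unit
C = 2.99792458e8               # m/s, speed of light in vacuum
MEV_PER_J = 1.0 / 1.602176634e-13

# Standard atomic masses in atomic mass units (u)
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

delta_m_u = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_j = delta_m_u * U_TO_KG * C**2  # E = m c^2

print(f"Mass defect: {delta_m_u:.6f} u")
print(f"Energy released: {energy_j * MEV_PER_J:.1f} MeV")  # ~17.6 MeV per reaction
```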
Fission uses uranium or plutonium as fuel; uranium is a naturally available element (although in limited quantities), and plutonium is an artificial element produced by nuclear reactions.
Civil fusion uses isotopes of hydrogen: deuterium (a component of heavy water), available in quasi-unlimited quantities in the oceans, and tritium, which exists naturally in small amounts in the atmosphere but is mostly produced artificially. Other elements, such as lithium, are used in H-bombs.
Techniques for civil production of fusion energy
Two competing techniques are candidates for the civil production of fusion energy:
fusion by magnetic confinement: the technique implemented in the ITER project. Reactors using it consist of a vast torus-shaped enclosure, inside which a plasma made of fusion fuel (a mixture of deuterium and tritium in the current project), confined by intense magnetic fields, is brought to a very high temperature (more than 100 million degrees) to allow fusion reactions to occur; this type of reactor is intended to operate in a quasi-continuous regime
fusion by inertial confinement: the technique that would be implemented in the planned inertial fusion reactors. Energy would come not from a continuously fusing plasma but from the fusion of fuel microcapsules, repeated cyclically, on a principle similar to that of the spark-ignition engine. Fusion is obtained thanks to the density and temperature reached in the microcapsule when it is subjected to laser radiation (laser inertial confinement), to a particle beam (ion-beam inertial confinement), or to a magnetic-pinch process (magnetic-pinch inertial confinement).
History of energies of fusion
Fission, like fusion, was first used in the military field for bombs of very great power: A-bombs for fission and H-bombs for fusion. It is, moreover, a small A-bomb that serves as the match for the H-bomb, producing the energy needed to detonate it.
Civil applications, in which energy is no longer released explosively but in a controlled form, appeared only later. While for fission fewer than 10 years elapsed between the military applications and civil energy production, the same has not been true for fusion: more than 50 years have already passed without any production power station being put into operation.
The first patent for a fusion reactor was filed in 1946 by the United Kingdom Atomic Energy Authority, the invention being due to Sir George Paget Thomson and Moses Blackman. It already contains certain basic principles used in the ITER project: a torus-shaped vacuum chamber, magnetic confinement, and heating of the plasma by radio-frequency waves.
In the field of magnetic confinement, it was theoretical work carried out in 1950-1951 in the Soviet Union by Igor Tamm and Andrei Sakharov that provided the foundations of what would become the tokamak, with research and development at the Kurchatov Institute in Moscow turning these ideas into reality. Equipment of this type was later developed in many countries and, although the stellarator competed with it for a time, it is the tokamak principle that was retained for the international ITER project.
A wire array used in magnetic-pinch confinement
The phenomenon of the magnetic pinch has been known since the end of the 18th century. Its use in the field of fusion grew out of research on toroidal devices, first at the Los Alamos Laboratory from 1952 (Perhapsatron), and in Great Britain from 1954 (ZETA), but the physical principles long remained poorly understood and poorly controlled. This technique was effectively implemented only with the appearance of the wire-array principle in the 1980s.
Although the use of lasers to start fusion reactions had been considered before, the first serious experiments took place only after the design of sufficiently powerful lasers, in the mid-1970s. The technique of ablative implosion of a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was proposed in 1972 by the Lawrence Livermore National Laboratory.
Advantages of fusion
The partisans of fusion energy put forward many potential advantages compared with other sources of electrical energy:
no greenhouse gas, such as carbon dioxide, is emitted
the fuel (deuterium and tritium, isotopes of hydrogen, in most current projects) presents no risk of shortage: deuterium exists in quasi-unlimited quantities in the oceans, and tritium is a by-product of nuclear energy production, from fission as well as from fusion
the quantity of radioactive waste is much smaller than that produced by the fission reactors currently in use and, above all, its radioactive half-life is much shorter: a few tens of years, against hundreds of thousands, or even millions, of years for the waste from fission reactors.
Nuclear reactor with molten salts
General diagram of a molten salt reactor
A molten salt reactor (MSR) is a type of nuclear reactor in which the nuclear fuel takes the form of a salt with a low melting point. The molten salt plays the role of both fuel and coolant. The reactor is moderated by graphite.
The concept was evaluated and retained by the Generation IV International Forum. It is the subject of studies and research with a view to deployment as a fourth-generation reactor, though with an estimated date of industrialization more distant than for the other concepts under study. Many nuclear plant designs are based on this type of reactor, but few prototypes have been built.
In a thermal neutron configuration, molten salt reactors have a graphite core bored with channels through which a salt of fissile and fertile material circulates, for example uranium tetrafluoride (UF4). The liquid becomes critical as it passes through the graphite core, which serves as moderator. It is also possible to operate without a moderator in a fast neutron spectrum; this configuration eliminates the problem of graphite recycling and improves the breeding ratio, but requires more fissile fuel at start-up.
The concept pairs the reactor with an online spent fuel treatment plant, responsible for progressively separating the fission products and removing them from the reactor as they are produced.
During an experimental irradiation, the Shippingport nuclear reactor demonstrated the feasibility of breeding in an epithermal spectrum with uranium-233 fuel on a thorium support.
In the 1960s, research on molten salt reactors was carried out primarily by the Oak Ridge National Laboratory, most of its work leading to the Molten-Salt Reactor Experiment (MSRE). The MSRE was a 7.4 MWth test reactor, intended to simulate the neutronics (with epithermal neutrons) of an intrinsically safe thorium breeder core. It went critical in 1965 and operated for four years. Its fuel was a LiF-BeF2-ZrF4-UF4 salt (65-30-5-0.1), moderated with pyrolytic graphite, and its secondary coolant was FLiBe (2LiF-BeF2). It reached 650 °C and was operated at full power for about a year and a half. Tests were also carried out with plutonium salts.
The liquid 233UF4 fuel that was tested demonstrated the feasibility and the very attractive character of a nuclear fuel cycle based on thorium, which minimizes waste, the radioactive waste produced having a half-life of less than 50 years. In addition, the reactor's operating temperature of 650 °C allows good thermal efficiency in the machines it supplies, for example gas turbines.
This research led, during 1970–1976, to an MSR design that would use the LiF-BeF2-ThF4-UF4 salt (72-16-12-0.4) as fuel, moderated by graphite replaced every four years, with NaF-NaBF4 as secondary coolant and a core temperature of 705 °C. To date, however, this molten salt reactor remains at the study stage.
Molten salt reactors are one of the research options retained within the framework of the Generation IV International Forum.
Advantages related to molten salts
Operation is reliable and maintenance easy. Fluoride salts are chemically and mechanically stable at atmospheric pressure, despite high temperature and intense radioactivity. Fluorine combines ionically with practically all the fission products, which makes them easy to remove. Even noble gases, in particular xenon-135, an important neutron poison, leave in a predictable and controllable way at the pump, where the fuel is at a lower temperature. Even in the event of an accident, dispersion into the biosphere is unlikely. The salts do not burn in air or water, and fluoride salts do not dissolve in water.
There is no high-pressure steam in the core, only molten salts at low pressure. This means that this reactor line cannot suffer a steam explosion and therefore does not require a reactor vessel resistant to high pressure, the most expensive part of a pressurized water reactor. Instead, a tank resistant to low pressure is enough to contain the molten salts. To resist heat and corrosion, the metal of the tank is an exotic nickel-containing alloy (Hastelloy-N), but much less of it is needed than for a pressurized water reactor vessel, and this metal is less expensive to form and to weld.
The reactor operates in a thermal or epithermal neutron spectrum. This mode of operation makes it possible to drastically reduce neutron leakage compared with a fast spectrum. The core is thus much more compact than those of fast reactors, with the mass of heavy metals reduced by a factor of about 10. Control of the core is also facilitated.
It makes it possible to easily use the nuclear fuel cycle based on thorium, which is hardly practical in other reactor lines.
The molten salt form allows online fuel treatment.
A molten salt reactor operates at a much higher temperature than light water reactors: about 650 °C in conservative designs, up to 950 °C in very-high-temperature designs. They are thus very efficient generators for the Brayton cycle. This high thermal efficiency is one of the objectives of Generation IV reactors.
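As a rough illustration of the efficiency gain, the ideal Carnot bound can be computed for the outlet temperatures quoted above. This is only a sketch: the ~300 °C light-water figure and the 30 °C cold-source temperature are typical assumed values, not taken from this article, and real Brayton-cycle efficiencies fall below this ideal limit.

```python
# Ideal (Carnot) bound on thermal efficiency for several reactor outlet
# temperatures. The ~300 C light-water figure and the 30 C cold source are
# assumptions for illustration.

def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Carnot limit 1 - T_cold/T_hot for temperatures given in Celsius."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

for label, t in [("light water reactor, ~300 C", 300),
                 ("conservative MSR design, 650 C", 650),
                 ("very-high-temperature design, 950 C", 950)]:
    print(f"{label}: Carnot limit {carnot_efficiency(t):.0%}")
```

The bound rises from roughly 47% at 300 °C to about 67% at 650 °C and 75% at 950 °C, which is the sense in which the higher operating temperature pays off.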
A molten salt reactor can operate at small sizes as well as large, so that an operator could easily build several small reactors (for example of 100 MWe), reducing operational and financial risks.
Like all nuclear plants, such a reactor has little effect on the biosphere. It requires only a small land area (if the mining needed to obtain the raw materials is not taken into account) and relatively modest buildings, and its waste is managed separately.
Advantages related to online reprocessing
The nuclear fuel of a molten salt reactor can be reprocessed by a small additional chemical installation. Weinberg noted that a small installation can provide the reprocessing needed by a high-power 1 GW reactor: all the salt must be reprocessed, but only every ten days. The waste balance of such a reactor is thus much lighter than that of a conventional light water reactor, which ships entire cores to recycling plants. Moreover, everything except the fuel and the waste stays on site at the plant.
The reprocessing process used is as follows:
A fluorination treatment removes uranium-233 from the salt. This must be done before the following stage.
A 4-metre-high molten-bismuth distillation column separates protactinium from the fuel salt.
An intermediate storage tank holds the protactinium coming from the column while it decays into uranium-233. With a 27-day half-life, ten months of storage ensure a 99.9% conversion into uranium.
A small installation distils the fluoride salts in the vapour phase. Each salt has its own evaporation temperature. The light salts evaporate at low temperature and form most of the salt. The thorium salts must be separated from the fission waste at higher temperatures.
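The protactinium storage step above follows simple exponential decay. A minimal sketch, taking "ten months" as roughly 300 days (an assumption), shows the 27-day half-life is consistent with the 99.9% conversion figure:

```python
# Exponential decay of protactinium-233 (27-day half-life) into uranium-233.
# "Ten months" is taken here as roughly 300 days; that is an assumption.

PA233_HALF_LIFE_DAYS = 27.0

def fraction_converted(days, half_life=PA233_HALF_LIFE_DAYS):
    """Fraction of Pa-233 that has decayed to U-233 after `days` of storage."""
    return 1.0 - 2.0 ** (-days / half_life)

# About 11 half-lives elapse in ten months, so the conversion is essentially
# complete, consistent with the 99.9% figure quoted above.
print(f"{fraction_converted(300):.2%}")
```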
The quantities concerned are about 800 kg of waste per year and per GW generated, which implies rather modest equipment. The long-lived transuranic salts can be separated out, or returned to the reactor to be used as fuel.
Advantages related to the thorium cycle
When combined with fuel reprocessing, the thorium cycle produces only 0.1% of the long-lived, highly radioactive waste that a light water reactor (the line of all modern reactors in the United States and France) produces without reprocessing.
When thorium-232 captures a neutron, it is transformed into thorium-233, which quickly decays into protactinium-233. Pa-233 decays in turn into uranium-233 with a 27-day half-life. Uranium-233 is a strongly radioactive isotope of uranium (159,200-year half-life), but it does not leave the reactor. This uranium-233, which does not exist in nature, is an excellent fissile isotope. It is the nuclear fuel primarily exploited by this cycle. When U-233 is bombarded by thermal neutrons, the neutrons generally cause fission.
A uranium-233 atom can also absorb the neutron (with a probability of about 1/7 or less) to produce uranium-234 (half as radioactive as U-233). This activation product will generally end up absorbing another neutron to become fissile uranium-235, which fissions under conditions similar to those of U-233 and thus contributes to the operation of the reactor as nuclear fuel. It can also (with a probability of about 1/6) be transformed into uranium-236, very slightly radioactive (half-life of 23 million years), which circulates with the rest of the uranium and ends up absorbing a further neutron, transforming it into uranium-237 (6.75-day half-life) and then into relatively stable neptunium-237 (half-life of 2.2 million years).
Neptunium can be chemically separated from the molten salt by reprocessing and removed as waste. This neptunium-237 is normally the only long-lived, highly radioactive transuranic waste; it accounts for about 2 to 3% of the quantity of uranium-233 initially produced.
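The 2 to 3% figure can be recovered by chaining the capture probabilities quoted above. Treating the two branch ratios as independent is a simplification, but the arithmetic is consistent:

```python
# Chaining the branch probabilities quoted in the text: U-233 captures a
# neutron (rather than fissioning) with probability ~1/7, and the chain then
# reaches U-236 (hence eventually Np-237) with probability ~1/6 at the U-235
# step. Treating these as independent ratios is a simplification.
p_u233_capture = 1.0 / 7.0   # U-233 -> U-234 instead of fissioning
p_u235_capture = 1.0 / 6.0   # U-235 -> U-236 instead of fissioning

np237_fraction = p_u233_capture * p_u235_capture
print(f"Np-237 yield per U-233 atom produced: {np237_fraction:.1%}")  # ~2.4%
```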
If the neptunium stays in circulation in the reactor, it undergoes a third neutron capture, which transforms it into neptunium-238, unstable with a half-life of 2.1 days, which in turn becomes plutonium-238, strongly radioactive (86.41-year half-life). Like neptunium, plutonium can be chemically separated, and this time forms high-activity, short-lived radioactive waste. This plutonium-238 is specific to the thorium line: not being fissile, it is not proliferating, and its relatively short life allows waste management on a historical timescale.
If the plutonium is in turn left in the reactor flow, it continues to absorb neutrons, successively creating all the plutonium isotopes between 238 and 242 (through the same reactions as in the uranium-plutonium line, which passes directly from U-238 to Pu-239). In this progression, a majority of atoms disappear at the fissile stages, plutonium-239 and plutonium-241. The remainder end up, with an even lower probability, as isotopes of the minor actinide series, americium and curium.
The thorium fuel cycle thus combines the advantages of intrinsically safe reactors, an abundant long-term fuel source, and the absence of costly nuclear fuel enrichment installations.
Continuous reprocessing enables a molten salt reactor to use more than 97% of its nuclear fuel. This is much more efficient than what any other reactor line achieves. By comparison, light water reactors consume only about 2% of their fuel in the open cycle. Moreover, with salt distillation, an MSFR can burn fluorinated plutonium, or even nuclear waste coming from light water reactors.
As with fast reactors, uranium-238/plutonium and thorium/uranium-233 cycles are both possible. Given the good neutron properties of uranium-233 in the thermal and epithermal spectrum, the thorium cycle is favoured.
A transition scenario from the pressurized water reactor (PWR) fleet to a fleet of thorium/uranium-233 breeder molten salt reactors would thus consist of burning the existing plutonium in PWRs on a thorium matrix, so as to build up a stock of uranium-233 for starting the molten salt reactors. The advantage of a cycle containing uranium-233 is that it introduces no uranium-238 into the reactor, thus limiting the production of plutonium and minor actinides, which is favourable from the waste point of view. Some 1,200 kg of uranium-233 are needed to start a molten salt reactor. A transition scenario involving construction of a breeder reactor would make it possible to produce some 200 kg of uranium-233 per year; six years of operation would thus be needed to start a molten salt reactor.
Cold fusion with hydrogen and nickel
A possible future
A host of questions emerged after the two researchers Andrea Rossi and Sergio Focardi reported what they presented as a successful cold fusion experiment. In the meantime, other scientists tried to confirm the basic principles behind the reactions said to occur in cold fusion. Some failed to obtain cold fusion in their first attempts, while others reported success.
This created more controversy and aroused more curiosity about the operation of cold fusion with hydrogen and nickel at temperatures below 1,000 K, as claimed by Andrea Rossi and Sergio Focardi. The claim contradicted the principles of nuclear physics, which pushed many scientists to keep working to understand the process. What follows is an account of what could possibly explain how cold fusion with hydrogen might work.
The process: the two products of the nickel-hydrogen fusion process are copper isotopes and energy. The copper isotope decays, producing a different nickel isotope that releases more energy. According to Andrea Rossi and Sergio Focardi, they were able to develop a cold fusion reactor based on this principle. The reaction is claimed to produce 12,400 watts of thermal energy with an electricity consumption estimated at only 400 watts.
In January, they held a press conference to describe the operation of their apparatus. Andrea Rossi and Sergio Focardi explained that when the atomic nuclei of hydrogen and nickel fuse in their device, or cold fusion reactor, less than one gram of hydrogen is used; the reactor starts with 1,000 watts of electricity, reduced after a few minutes to 400 watts. While the reaction proceeds, 292 grams of water at 20 °C are converted into steam at 101 °C.
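A quick energy balance on the quoted figures, assuming (since the article gives no time base) that the 292 g of water is vaporized each minute; the specific and latent heats below are standard textbook values. Under that assumption, the implied thermal power comes out close to the quoted 12,400 W:

```python
# Energy needed to take 292 g of water from 20 C to steam, using standard
# values for the specific heat and latent heat of vaporization. The
# per-minute time base is an assumption: the article quotes a mass of
# water but no flow rate.
m = 0.292               # kg of water
c_water = 4186.0        # J/(kg K), specific heat of liquid water
l_vap = 2.26e6          # J/kg, latent heat of vaporization
delta_t = 100.0 - 20.0  # K, heating from 20 C to the boiling point

energy_j = m * (c_water * delta_t + l_vap)  # about 758 kJ per 292 g batch
power_w = energy_j / 60.0                   # implied power if one batch per minute

cop = 12400.0 / 400.0   # claimed thermal output over electrical input
print(f"energy per batch: {energy_j/1e3:.0f} kJ, implied power: {power_w:.0f} W")
print(f"claimed output/input ratio: {cop:.0f}")
```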
The principle of the reaction: Professor Christos Stremmenos has offered a theory of how cold fusion with hydrogen and nickel might proceed. He supports the theory of Andrea Rossi and Sergio Focardi, according to which nickel nuclei in a crystalline structure fuse with the hydrogen nuclei diffused among them, the Coulomb forces being overcome by the resulting nuclear forces. Nickel acts as a catalyst, breaking up the diatomic hydrogen molecules into individual atoms as these molecules come into contact with the surface of the nickel atoms. The electrons of the hydrogen atoms settle on the nickel atoms in the Fermi band and diffuse more deeply into the crystalline structure of the nickel.
Nickel-hydrogen fusion is thus said to occur. Professor Christos Stremmenos also believes that the electrons in the central cavity of the nickel crystal produce a screening force. This shield confines the deuterium or hydrogen nuclei within the nickel atom; Stremmenos suggests that this serves as an energy source for the cold fusion reaction. Subsequently, the hydrogen atoms captured in the nickel give rise to exothermic nuclear reactions producing isotopes from the fusion of hydrogen. As a physicist, Professor Christos Stremmenos took a qualitative approach to make this theory understandable, basing it on three points:
The Bohr hydrogen atom: in Bohr's model, the hydrogen atom remains in a stationary state as long as no energy is applied to it. This is explained by the in-phase (de Broglie) wave, which maintains the electron's circular orbit. The radius of the circular trajectory is determined by the fundamental energy states of the atom. Once the hydrogen atoms come into contact with the nickel nuclei, they leave their stationary state and release their electrons.
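The stationary states invoked here follow the usual Bohr-model formulas, E_n = E_1/n² and r_n = n²·a₀. A minimal sketch, using the standard values for the ground-state energy and the Bohr radius:

```python
# Bohr-model stationary states of hydrogen: E_n = E_1 / n^2 and r_n = n^2 a_0,
# with the standard ground-state energy and Bohr radius.
E1_EV = -13.6      # eV, hydrogen ground-state energy
A0_M = 5.29e-11    # m, Bohr radius

def energy_ev(n):
    """Energy of the n-th stationary state, in electron-volts."""
    return E1_EV / n ** 2

def radius_m(n):
    """Radius of the n-th circular Bohr orbit, in metres."""
    return n ** 2 * A0_M

for n in (1, 2, 3):
    print(f"n={n}: E = {energy_ev(n):6.2f} eV, r = {radius_m(n):.2e} m")
```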
The electrons are deposited in the conduction band of the nickel atoms and spread quickly through the crystalline structure. If there are tetrahedral or octahedral spaces in the crystal lattice, they occupy these empty spaces. These deposited electrons create a conduction electron cloud distributed across the energy bands (the Fermi band), allowing the electrons to move freely throughout the metal mass. This is where the Heisenberg uncertainty principle comes in.
The Heisenberg uncertainty principle: the delocalized electrons, being in a dynamic state, are in a state of uncertainty explained by the Heisenberg uncertainty principle. This probably lasts about 10⁻¹⁸ seconds, during which a series of neutral hydrogen mini-atoms could be formed. These could be in an unstable state, of varied sizes and at various energy levels, while they are within the Fermi band. The neutral hydrogen mini-atoms have a high energy and a short wavelength due to their cyclic (de Broglie) orbits. They are captured by the nuclear reaction within the crystalline structure, and this occurs within about 10⁻²⁰ seconds. The hydrogen atoms then fuse with the nickel nuclei. However, they must have a size smaller than 10⁻¹⁴ m. The assumption made here is that only some atoms will satisfy the de Broglie condition.
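The de Broglie condition mentioned above can be made quantitative. A sketch, using standard constants (the specific energy figure is a derived check, not from the article), of the kinetic energy at which a hydrogen nucleus's de Broglie wavelength shrinks to the 10⁻¹⁴ m scale:

```python
import math

H = 6.626e-34        # J s, Planck constant
M_H = 1.67e-27       # kg, mass of a hydrogen nucleus (proton)
EV = 1.602e-19       # J per electron-volt

def de_broglie_wavelength(kinetic_energy_j, mass=M_H):
    """Non-relativistic de Broglie wavelength, lambda = h / sqrt(2 m E)."""
    return H / math.sqrt(2.0 * mass * kinetic_energy_j)

def energy_for_wavelength(wavelength_m, mass=M_H):
    """Kinetic energy at which the de Broglie wavelength equals the target."""
    return H ** 2 / (2.0 * mass * wavelength_m ** 2)

# Kinetic energy for a hydrogen nucleus to reach the 1e-14 m scale:
e_needed = energy_for_wavelength(1e-14)
print(f"{e_needed / EV / 1e6:.1f} MeV")  # ~8.2 MeV
```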
Nuclear reactions at high speed: Andrea Rossi and Sergio Focardi proposed a mechanism checked against mass spectroscopy data. It predicts that the nickel nuclei change into unstable copper isotopes. Professor Christos Stremmenos adds that the hydrogen mini-atoms trapped in the nickel nuclei undergo a destruction of the kind predicted by Andrea Rossi and Sergio Focardi, which causes the decay of the copper nuclei produced and leads to the emission of very high energy photons. To conclude, this is the best available explanation of the operation of cold fusion with hydrogen and nickel. However impossible or contrary to the laws of nuclear physics it may appear, the cold fusion reaction is claimed nonetheless to exist.