I am back from Vancouver, where I attended the Cisco Connect Roadshow
and got a feel for what Cisco considers the innovation of today,
thanks to the excellent presentation by Victor Woo, General
Manager, Industry Transformation at Cisco Canada: "Fueling Innovation with the Internet of Everything". The notes below are some thoughts arising from and illustrating Victor's talk and other sessions.
Thanks to widespread Internet adoption and over 10 billion connected
devices around the world, companies today are more excited than ever
about the Internet of Things. Add in the hype about Google Glass and the
Nest Thermostat, and nearly every business, including those from
traditionally low-tech industries, wants to get on the cloud, track a
group of devices, and gather data. The question, however, is not whether a
device can be connected, but why a company would connect a previously
“dumb” product to the cloud. Or, stated differently, if a company invests
in making my toaster talk to my lawnmower, is that really a good
business model, and why?
Companies that are successfully adapting and innovating Internet of
Things platforms are focused on identifying meaningful opportunities,
not just technologies.
Various forms of
post-implementation maintenance will become more common, and may change
how you deal with your vendors. The Internet of Things (IoT) goes well
beyond human Internet users.
Some predict that by 2015 not only will 75% of the world's population
have access to the Internet, but so will more than six billion devices.
Traditional business models focused on the sale of an item, with
post-sale revenue coming from maintenance, usually delivered as an
on-call service without real-time monitoring. Many companies will continue to follow this model,
but it is vulnerable to shrinking profit margins in very competitive markets.
The world we live in is becoming increasingly complex and increasingly connected.
More than 12 billion people and things are communicating today via the Internet; yet it is estimated that more than 99% of objects are still unconnected.
The standardization of communication protocols and the
consequent rapid global adoption of IP and the Internet are moving us from
the information age into the networking age. The Internet provides the
technical and human network to connect people, processes, data
and things. As the Internet of Everything
(IoE) connects the unconnected, more than 50
billion smart objects are expected to communicate freely over the Internet by 2020,
and early indicators suggest that this might be a conservative estimate.
Acknowledgement: Victor Woo, Rick Huijbregts, Gary Audin
Connecting new technologies and future vision in adaptable System Architecture
Saturday, November 30, 2013
Friday, November 8, 2013
The History of Innovation - Part 2 "Christensen Effect"
Recently I was introduced to a series of lectures by Harvard Business School professor Clayton Christensen, who is considered the architect of the Disruptive Innovation concept.
I must admit, I got somewhat addicted to Clayton's soft manner of speaking and the subtle humor hidden under his serious voice. A good example: Innovator's Dilemma.
The main idea he is promoting seems so obvious after you hear it, perhaps also due to the illustrative way he delivers it.
However, while it is very engaging to listen to Professor Christensen, and his ideas together with his illustrations seem to make sense, the terminology he uses appears somewhat arbitrary (what else would you expect from an economist? :)
Sometimes, however, he uses more technical language - e.g. simplifying complexity, integration vs. modularity - and then things become much clearer. In a recent interview at Davos he even talked about a "common language" and a "common framework", which are exactly the words used to describe Systems Architecture.
Let's talk for a while about the "Christensen Effect", as former Intel CEO Andrew Grove called it. Professor Christensen asserts that there are three types of innovation. "Sustaining innovation" is when existing products or services keep getting better in order to serve existing customers with "newer", "bigger", "luxury" offerings. This is what big corporations mostly do. It is a natural thing, but it does not generate new jobs. "Efficiency innovation", on the other hand, results in producing the same goods or services with fewer resources and at lower cost. This is low-hanging fruit: it pays back quickly and creates capital rather than consuming it, but it does not create new jobs either - rather, it needs fewer people. Finally, "disruptive innovation" results in a new generation of technologies, products and services. It generates jobs and moves progress forward, but requires a significant investment of capital and time (in his estimate, 5-10 years).
Prof. Christensen gives a number of examples in different industries - from steel manufacturing to cars and even stock investing (Learning to Pivot). Here is one of my favorite examples.
The 1940s and '50s were the time of the tube radio - a big, bulky and heavy box put on the table, with a speaker, lots of knobs and so on - practically a piece of furniture. I still remember from my childhood (back in the Soviet Union) how my parents discussed buying a radio as a big deal; finally we went to the store and bought one on credit, and everybody in the family was proud of the purchase. At that time transistors had been in existence for quite a while, but they were considered impractical because they could not handle the power required by big radios. Then Sony came out with a tiny transistor radio that could fit in a pocket and cost $2, so even a teenager could buy one. And it was okay (for a while) that the sound quality wasn't great - it offered one unbeatable advantage in addition to being affordable: mobility!
Christensen also almost invariably uses the example of computers. The first computers were huge mainframe machines, taking up entire rooms, costing millions of dollars and requiring a high level of skill to operate. Then came mini-computers, much smaller and much less expensive, so almost any company could have one (remember punch cards, remember Algol, remember Fortran?). Then came the personal computer, and now the iPhone fitting in your palm has more power than the first mainframes did. Did you notice the trend toward mobility?
Look at the evolution of the telephone - from turning a crank and asking the operator to connect you with the desired party, to rotary dial phones, to Motorola's first mobile phone, whose 40th anniversary was celebrated recently - "the brick", heavy and exorbitantly expensive. Look at what we have now: a light, sleek, multi-functional device that easily fits in your pocket. And again, mobility was the vector of evolution.
Let's turn now to sustainable energy technologies. (Clayton Christensen mentions in one of his interviews that he has come to rarely use the word "innovation" because it is used so broadly that it has practically lost its meaning. For the same reason I try to avoid the word "sustainable" - it is not only overused but often abused; however, for lack of a commonly accepted alternative I will have to use it.) We know how progress in electric vehicles is slowed by the absence of light, compact and cheap batteries. Progress in the not-long-ago hyped fuel cell technology has been disappointing, to say the least. One recent announcement by Redox Power, claiming to have developed a compact and inexpensive fuel cell, is worth mentioning. If it proves to fulfill its promise, it may become one of those disruptive technologies.
What about photovoltaic technology? Despite significant improvements in efficiency and a recent dramatic drop in the price of PV modules, they are still expensive, require a huge unobstructed roof area and, without ideal conditions, are unable to contribute anything significant to a building's energy demand (not even talking about the payback period of 15-20 years in the absence of subsidies!).
Solar thermal technologies, especially with the advent of vacuum tube collectors, have much higher efficiency and do not require as big an area as PV does. However, they still need a fair amount of space and are not easily integrated into the building. And of course there is the plague of all solar technologies: they work only during the day, it has to be a sunny day, and most of the energy can be collected during a few hours around noon rather than in the morning or in the evening. There is a solution, however. For example, we may use collimated light the way it is done in the SunCentral daylighting technology. There are other technologies that nicely complement solar, such as air-to-water heat exchange and high-efficiency thermal storage. Connected in an optimal configuration and controlled by an adaptive algorithm, they can dramatically reduce the cost and the size of the system.
Would such a composite energy solution, built on the principles of Systems Architecture and providing adequate performance in a compact, easy-to-handle, even mobile package, herald the ascent of the next "disruptive innovation"?
I think so.
Tuesday, October 29, 2013
The History of Innovation
It is important to review history to understand how radical innovation
works with technology to transform people's perceptions and create new, previously unexpected opportunities. In a recent post I talked about how the evolution of the engine made possible the evolution of modern aviation. But there is another, crucial component of an aircraft, without which the machine would be useless: its controls. They are what allow an airplane to ascend, change direction, stay in flight and land safely. The ability to control an airplane was in fact the real innovation of the Wright brothers - not the invention of the airplane, as many believe.
Interestingly, in a speech to the Aero Club of France on 5 November 1908, Wilbur Wright admitted: "I confess that in 1901, I said to my brother Orville that man would not fly for fifty years. . . . Ever since, I have distrusted myself and avoided all predictions". How things change! The very next year the Wrights became engaged in a legal fight to establish their priority over the first controlled flight and anything related to it. Their opponents derisively suggested that if someone jumped in the air and waved his arms, the Wrights would sue him ...
Today we talk about the "Internet of Things" (IoT). The term was coined in 1999,
but Mark Weiser at Xerox PARC, together with John Seely Brown, led visionary
research in the late 1980s on what is now called the IoT, using the term "ubiquitous computing" for the third
generation of computing. Weiser's paper, "The Computer for the 21st Century",
was published in the September 1991 issue of Scientific American. In the early 1990s,
Steelcase of Grand Rapids, Michigan built and patented several
inventions of what is now called IoT before becoming a charter founding
member of the M.I.T. consortium "Things That Think" (TTT), created in
1995. Start-up companies such as Echelon were founded in the early
1990s to commercialize IoT technology. GE changed its business
model to a manufacturing/service model and began building products with
embedded networked "smarts" in the 1990s, recently identifying the
"industrial Internet" as a $32 trillion opportunity. GM launched
OnStar(tm) in the 1990s. IBM now promotes the concept of a "smart planet".
Today radical innovations are built with a new, fourth generation of innovation theory and practice that replaces the linear stage model with an iterative, nonlinear model. The linear model is only effective for incremental innovations within the dominant design (DD) that governs an industry or market; 4G creates a new dominant design. It was first described in the 1998 book Fourth Generation R&D. The U.S. Department of Energy is now practicing these principles in its Innovation Hubs. A similar concept is used in the Research and Innovation Centre at the newest Russian university, Skolkovo Tech, set up in collaboration with M.I.T. With 4G, economics changes too, replacing neoclassical and Keynesian economics with "Innovation Economics". 4G changes financial accounting to measure both tangible and intangible capital. 4G is based on capabilities, which are built as people garner knowledge, tools, technologies and processes. It is a natural extension of the principles exercised by Systems Architecture.
Systems Architecture is concerned with formal tools and methods to
define the elements of complex, large-scale technical and non-technical systems and the interfaces between them. It helps to structure and link the
capabilities needed to build new technologies, organizations, business models and
infrastructures.
Saturday, October 19, 2013
Ascent and New Mobility
A couple of interesting modular concepts came to my attention recently - by chance, from opposite parts of the globe.
One is Coodo. Its simple, elegant, functional shape, multipurpose use and modular capability attracted my attention.
The Slovenian designers claim the module can withstand temperatures from -40C to +50C. I would be interested to know how they are going to achieve that given the large - although unarguably attractive - window surface area. I would also like to know what source of energy they use to maintain comfort and power all the modern conveniences, from shower to TV and Internet. I hope it is neither a diesel generator nor propane.
Ascent Systems offers a solution - compact and "green". Its Aero-Solar technology uses solar thermal energy as the main source, with high-capacity, ultra-compact thermal storage and a super-efficient thermal booster. It reduces energy consumption by up to 80%, and because it requires very little electricity to operate, that electricity can also be provided by solar PV modules integrated into the system, making it completely autonomous.
Another interesting concept is Romotow, a sleek foldable trailer designed by the New Zealand team from W2 Limited.
It could definitely benefit from Ascent's technology to provide for its energy needs.
Moreover, I can picture it housing a packaged Aero-Solar configuration to provide energy for remote locations with no access to the grid or other sources of energy - heli-skiing camps, for example.
And why not attach it to an electric tow vehicle and make the whole thing totally "green"? Connecting technologies in action!
Tuesday, October 8, 2013
Connecting Technologies - Part 2: Aero-Solar System
In the previous post we talked about how connecting technologies can create a new quality, using the example of the turbofan engine, which made modern commercial aviation possible.
I dare to claim that the same is about to happen with integrated building energy technology.
Let's look, for example, at today's solar thermal systems. The electromagnetic energy of solar radiation is captured by solar collectors, transformed into thermal energy of the heated core, and passed to the domestic hot water system via a heat exchanger. A pump circulates the water in the system, passing this thermal energy to the storage.
It would be nice if the sun never set and we could collect its "free" energy 24 hours a day. As we all know, this is not the case. The stubborn sun doesn't want to shine all day. Worse, it travels across the sky and rises and sets at different times in winter and summer, not to mention clouds and other natural phenomena, which makes it impossible to collect energy consistently throughout the day. At best, we collect the most for one brief period when the sun is directly above the collector (and even then only if we tilt it at the right angle - a subject for a separate discussion). The energy we could in theory collect during the day would therefore have a profile something like the dotted line shown here. In practice, due to delayed heat loss, it might look more like the solid orange line.
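To make the shape of that availability curve concrete, here is a minimal Python sketch of a toy clear-sky model, assuming a hypothetical 3 kW collector and a 6:00-18:00 daylight window; the half-sine elevation term is an illustration, not a real solar-position calculation.

```python
import math

# Toy clear-sky model: collector output rises and falls roughly with solar
# elevation between sunrise and sunset. All numbers are illustrative only.
SUNRISE, SUNSET = 6.0, 18.0   # hours, hypothetical mid-season day
PEAK_KW = 3.0                 # hypothetical peak collector output, kW

def collector_output(hour):
    """Approximate thermal output (kW) of a fixed collector at a given hour."""
    if hour <= SUNRISE or hour >= SUNSET:
        return 0.0
    # Sun elevation approximated by a half-sine over the daylight period.
    elevation = math.sin(math.pi * (hour - SUNRISE) / (SUNSET - SUNRISE))
    return PEAK_KW * elevation

profile = {h: round(collector_output(h), 2) for h in range(24)}
daily_kwh = sum(profile.values())   # 1-hour steps, so kW here ~ kWh per step
print(profile)
print(f"Theoretical daily collection: {daily_kwh:.1f} kWh")
```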
However, what if the actual demand for hot water does not coincide with the time when we have the most energy available?
And indeed, the typical household hot water demand is almost the inverse of the energy profile. In the morning we wake up, take a shower, make breakfast and go to work - a morning spike in hot water consumption. We are usually not home during the day - at work, at school, on other business - so consumption through midday is near zero. Then we come home, have dinner, turn on the dishwasher, do laundry, take a bath - another, larger spike in hot water consumption.
Comparing both graphs, one can easily see that the peaks of one correspond to the troughs of the other! A classical economics question: how do we match supply and demand?
One solution is to accumulate the excess energy at the peak of its availability and use it when we don't have enough, say in the evening. Sure, a hot water storage tank can do the work. This is what is used in almost all domestic hot water systems, regardless of the source of energy - conventional gas or electric heating, or alternatives such as solar or geothermal. Speaking of the latter, geothermal can be a consistent source of energy (shown as the solid black line on the previous graph), since the temperature underground does not depend on the sun's position or the weather - a couple of meters below the surface it is practically constant throughout the year. The problem is that obtaining enough energy requires drilling, which is often not possible in urban conditions, sometimes not allowed for environmental reasons, and always expensive.
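A minimal sketch of the buffering idea, using made-up hourly supply and demand numbers and a hypothetical 10 kWh tank; it only illustrates how a tank shifts the midday surplus toward the morning and evening spikes.

```python
# Toy simulation of a storage tank buffering midday solar supply against a
# morning/evening demand pattern. Tank capacity, supply and demand are all
# hypothetical, illustrative numbers.
TANK_CAPACITY_KWH = 10.0

# Hypothetical hourly solar supply (kWh), peaking around noon.
supply = [0]*6 + [0.3, 0.9, 1.6, 2.3, 2.8, 3.0, 2.8, 2.3, 1.6, 0.9, 0.3, 0.1] + [0]*6
# Hypothetical hourly demand (kWh): morning and evening spikes, quiet midday.
demand = [0]*6 + [2.5, 1.5] + [0.2]*9 + [1.0, 2.0, 2.5, 1.0] + [0]*3

stored, unmet, dumped = 0.0, 0.0, 0.0
for hour in range(24):
    stored += supply[hour]
    if stored > TANK_CAPACITY_KWH:           # tank full: surplus is dumped
        dumped += stored - TANK_CAPACITY_KWH
        stored = TANK_CAPACITY_KWH
    draw = min(stored, demand[hour])         # serve demand from the tank
    unmet += demand[hour] - draw
    stored -= draw

print(f"Unmet demand: {unmet:.1f} kWh, dumped surplus: {dumped:.1f} kWh")
```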
Back to solar. Many locations in Canada enjoy an abundance of solar radiation, especially in summer.
A solar thermal system is, on average, the most cost-effective renewable system. It provides much higher efficiency than solar photovoltaic systems (the latter involve conversion of solar radiation into electrical energy, and any conversion means a loss of efficiency). It should come as no surprise, however, that due to shorter days and a lower sun, the energy coming from the sun in winter is much less than in summer. If we want to match the summer peak demand, the energy balance over 12 months might look something like the graph below.
But we also want to take showers in winter - maybe even more than in summer! How can we always match the demand? Of course, we can add more solar collectors, increasing the total energy collected to the point where we match the peak demand in the worst of winter. Then the energy balance may look something like the next graph.
One can easily see that even if we manage to match the peak demand at any given moment, for most of the year we collect much more energy than we are able to use, so we need to dissipate (dump) it. Overcapacity is not a very sustainable way of building a sustainable system, is it? Not to mention the additional cost of the extra collectors and, in most cases, the sheer impossibility of allocating a large enough area to accommodate that many of them.
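A toy monthly balance, with illustrative yield and demand numbers, shows how sizing the array for the worst winter month leaves a large summer surplus that has to be dumped:

```python
# Hypothetical monthly solar yield per m^2 of collector (kWh) and a flat
# monthly hot-water demand (kWh); the numbers are illustrative only.
yield_per_m2 = [40, 60, 90, 120, 150, 165, 170, 150, 110, 75, 45, 35]  # Jan..Dec
monthly_demand = 300.0

# Size the array so that even the worst month meets demand.
area = monthly_demand / min(yield_per_m2)   # ~8.6 m^2 in this toy example

collected = [area * y for y in yield_per_m2]
dumped = sum(max(0.0, c - monthly_demand) for c in collected)
total = sum(collected)
print(f"Collector area: {area:.1f} m^2")
print(f"Collected {total:.0f} kWh/yr, of which {dumped:.0f} kWh "
      f"({100*dumped/total:.0f}%) exceeds demand and must be dumped")
```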
We want to size the solar thermal system so that it collects energy when there is an excess of it relative to demand, to be stored and used when usable solar radiation is lacking (or absent). But how do we choose the right storage capacity?
A high-capacity thermal storage, capable of accumulating a large amount of thermal energy and efficient enough not to lose it quickly, may require a large, super-insulated tank, ideally underground, which may be impractical or otherwise expensive.
The answer comes, again, in the form of an integrated system. Air-to-water heat pumps (a.k.a. active thermal exchange) are becoming more and more efficient, reaching COPs of 5-6 and higher, meaning they deliver five to six times more heat than the electrical energy they consume. Combined with solar thermal collectors and high-capacity, low-loss thermal storage, and controlled by software using an adaptive algorithm, such an integrated system can satisfy demand at any time cost-effectively, with high reliability, requiring little physical space and using no fossil fuel.
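A back-of-the-envelope comparison, with an assumed COP of 5 and a hypothetical daily shortfall, shows why the heat pump is such a useful partner for the solar side of the system:

```python
# How much electricity does it take to cover a heat shortfall? A heat pump
# with COP ~5 needs roughly one fifth of what a resistive heater would.
# All figures are illustrative assumptions, not measured data.
shortfall_kwh = 8.0      # heat still needed after solar contribution, per day
cop = 5.0                # assumed seasonal COP of an air-to-water heat pump

electricity_heat_pump = shortfall_kwh / cop
electricity_resistive = shortfall_kwh / 1.0

print(f"Heat pump: {electricity_heat_pump:.1f} kWh of electricity per day")
print(f"Resistive: {electricity_resistive:.1f} kWh of electricity per day")
print(f"Savings:   {100 * (1 - electricity_heat_pump/electricity_resistive):.0f}%")
```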
Ascent Systems Technologies, with support from the National Research Council of Canada, has developed a program called ASPA (Aero-Solar Predictive Algorithm). It does exactly that: it automatically chooses the optimal parameters of the integrated hydronic system given the location and the actual demand.
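As an illustration of the kind of choice such a program has to make (this is a hypothetical brute-force sketch, not the actual ASPA algorithm), one could search candidate collector areas and tank sizes for the cheapest combination that still covers most of the annual demand:

```python
from itertools import product

# A hypothetical brute-force sizing search in the spirit described above; it is
# not the actual ASPA algorithm. Costs, yields and demand are made-up numbers.
AREAS_M2 = [2, 4, 6, 8]                 # candidate collector areas
TANKS_KWH = [5, 10, 15, 20]             # candidate storage capacities
COST_PER_M2, COST_PER_KWH = 500, 120    # assumed installed costs, $

def unmet_fraction(area, tank):
    """Fraction of annual demand a given (area, tank) pair fails to cover,
    using a crude monthly balance with illustrative numbers."""
    yield_per_m2 = [40, 60, 90, 120, 150, 165, 170, 150, 110, 75, 45, 35]
    demand = 300.0
    unmet = 0.0
    for y in yield_per_m2:
        available = min(area * y, demand + tank)   # tank caps usable surplus
        unmet += max(0.0, demand - available)
    return unmet / (12 * demand)

best = min(
    (c for c in product(AREAS_M2, TANKS_KWH) if unmet_fraction(*c) < 0.10),
    key=lambda c: c[0] * COST_PER_M2 + c[1] * COST_PER_KWH,
    default=None,
)
print("Cheapest configuration meeting 90% of demand:", best)
```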
Friday, October 4, 2013
Connecting Technologies case
Connecting different technologies can create a new, synergistic quality.
One notable example comes from an area familiar to me. The advent of aviation began with the first brave attempts at winged flight.
But only connecting these flying machines with a propeller driven by a piston engine, and finding a way to control them, made possible their evolution to the point where aviation became widely used in both commercial and military applications.
By the end of World War II, however, piston-engine, propeller-driven aircraft had reached their limit of efficiency and hit the speed barrier.
Since ancient times people have known there was another way of propelling a craft, one that would allow practically any speed: rocket flight. It had one significant flaw, though - the thirsty rocket would quickly run out of fuel, limiting its application and ruling out commercial use.
Neither propeller-driven nor rocket-driven aircraft could do more without a breakthrough.
The breakthrough came in the form of the integration of both in one: the turbofan engine! Exhaust gas from the jet core drives a turbine, and a fan coaxial with the turbine compresses the incoming air. A portion of that air goes into the combustion chamber. The remainder passes through the fan, or low-pressure compressor, and is either ejected directly as a "cold" jet or mixed with the gas-generator exhaust to produce a "hot" jet. The objective of this sort of bypass system is to increase thrust without increasing fuel consumption.
Thus the integration of a propeller (fan) and a jet produced a completely new quality: an engine that is both fuel-efficient and capable of much higher speeds. Most commercial aircraft today use turbofan engines.
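A minimal momentum-based estimate, with made-up mass flows and exhaust velocities rather than data for any real engine, illustrates why the bypass stream adds thrust without adding fuel burn:

```python
# Momentum-based thrust estimate F = m_dot * (v_exit - v_flight) for the core
# and bypass streams of a turbofan. The mass flows and velocities below are
# made-up, illustrative numbers, not data for any real engine.
v_flight = 250.0          # aircraft speed, m/s

def stream_thrust(m_dot, v_exit):
    """Net thrust contribution (N) of one exhaust stream."""
    return m_dot * (v_exit - v_flight)

core_only = stream_thrust(m_dot=80.0, v_exit=600.0)                 # pure jet
with_bypass = core_only + stream_thrust(m_dot=400.0, v_exit=320.0)  # add fan air

print(f"Core stream alone:     {core_only/1000:.0f} kN")
print(f"Core + bypass streams: {with_bypass/1000:.0f} kN")
# The bypass stream adds thrust from air that never passes through the
# combustion chamber, so fuel flow (all in the core) stays roughly the same.
```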
Friday, September 20, 2013
90 Percent Rule
I read the news about Tesla working on a 90% autonomous car.
It attracted my attention not only because Tesla makes very nice cars, but also because I write about future mobility.
I liked Elon Musk's rule of 90 percent. It is very close to the rule I apply to Performance Homes and integrated energy technology. I insist that we should strive for 90% efficiency - this is the most cost-effective target. If we want to move from 90% to 99%, we need to be prepared to spend the same amount of effort again, i.e. to double it. And what about 100% efficiency (the so-called "net-zero" approach)? You will likely need to triple or even quadruple the effort.
Sunday, September 15, 2013
Performance Home, Part 2
Picture a hole in the side of your house
that’s just as big as a typical computer screen. Imagine the
wind blowing through that hole. The hole is real. If you were to combine all the cracks and crannies
in a typical Canadian home, they’d add up to almost 1,400 square
centimetres, roughly the size of 2.5 magazine pages.
Plugging that hole is the simplest way for Canada to save energy. Plugging the hole also saves money, creates jobs, cuts greenhouse-gas emissions and makes our homes more comfortable.
We know how to find the hole. Canadians pioneered the use of a tool
that can measure the airtightness of a building. Natural Resources
Canada (NRCan) has used this “blower door” to test more than 800,000
Canadian homes.
Canadians also know how to fix the hole. Way back in 1977, they built a house
so airtight and so well insulated that a hair dryer could have kept it
warm through the winter – in cold Saskatchewan.

Yet despite the fact that buildings account for roughly one-third of our national energy consumption and the fact that we’re world leaders in small building energy-conservation technology, Canadians still haven’t plugged the hole. Most of our existing homes remain quite drafty, and most of our new homes fail to meet decades-old efficiency standards.
Builders have long known that heat claims the lion’s share of the energy consumed in Canadian homes: 57% of the total, compared with 24% for hot water, 13% for appliances and 5% for lighting. They’ve also known that heat escapes wherever air escapes, mostly under doors and around windows.
The agreed-upon standard measurement is the number of times per hour a blower-door fan would replace all the air in a house at a prescribed pressure of 50 pascals (Pa). The metric is called “air changes per hour” (ACH) at 50 Pa. With gaps totaling 1,400 sq. cm, the average Canadian home leaks enough air to result in 6.85 ACH@50Pa.
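The arithmetic behind that metric is simple: the airflow measured at 50 Pa divided by the house volume. A minimal sketch, with a hypothetical house and a blower-door reading chosen to land near the Canadian average:

```python
# ACH@50Pa is simply the measured airflow at 50 Pa divided by the house volume.
# The house dimensions and blower-door reading below are hypothetical.
house_volume_m3 = 180 * 2.5          # 180 m^2 floor area, 2.5 m ceilings
airflow_at_50pa_m3_per_h = 3080      # assumed blower-door reading, m^3/h

ach50 = airflow_at_50pa_m3_per_h / house_volume_m3
print(f"ACH@50Pa = {ach50:.2f}")     # ~6.85, the Canadian average cited above
```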
In the wake of the 1973 Arab oil embargo, the Saskatchewan Research Council designed an energy-efficient home appropriate for the Saskatchewan winter. The oil crisis prompted many similar projects, with most focusing on new ways to trap solar heat within a more or less standard building. The Saskatchewan team elected, instead, to design a radically more efficient building envelope. The Saskatchewan Conservation House, completed in Regina in 1977, was likely one of the first buildings to combine three key elements: superinsulation, extreme airtightness and a heat-recovery ventilator.
In an era when nearly all houses were constructed of four-inch-thick walls filled with R-8 insulation, the two-storey Saskatchewan house featured 12-inch-thick R-40 walls and R-60 roof insulation. Likewise, single-paned windows were then the norm; this home had triple-glazed windows. The house also boasted extreme airtightness. Most new houses at the time scored in the range of 9 ACH@50Pa; the SCH achieved 0.8 ACH@50Pa. At the time it was likely the tightest house in the world.
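To get a feel for what the jump from R-8 to R-40 means, here is a minimal steady-state conduction estimate, Q = A·ΔT/R, using an assumed wall area and temperature difference (imperial units, to match the R-values quoted):

```python
# Steady-state conduction through a wall: Q = A * dT / R (imperial units,
# BTU/h, matching the R-values quoted in the article). The wall area and the
# indoor/outdoor temperature difference are assumed for illustration.
wall_area_ft2 = 1500
delta_t_f = 70          # e.g. 70 F indoors vs 0 F outdoors

def heat_loss_btu_per_h(r_value):
    """Conductive heat loss through the walls for a given R-value."""
    return wall_area_ft2 * delta_t_f / r_value

q_r8 = heat_loss_btu_per_h(8)    # typical wall of the era
q_r40 = heat_loss_btu_per_h(40)  # Saskatchewan Conservation House wall

print(f"R-8 walls:  {q_r8:,.0f} BTU/h")
print(f"R-40 walls: {q_r40:,.0f} BTU/h  ({q_r8/q_r40:.0f}x less)")
```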
To provide fresh air to the airtight house, the Saskatchewan team built an air-to-air heat exchanger. This device pulled in fresh (but cold) outdoor air through a series of baffles. Stale (but warm) indoor air was pushed out through the other side of those same baffles, and heat was transferred from the exhaust air to the incoming fresh air.
The SCH had no furnace. Instead, it relied on a system that collected solar heat during the day, stored it in a water tank, then released the heat at night. All told, the house required less than a quarter of the energy consumed by a standard home of the time.
That same year, the “House As a System” approach pioneered in Saskatchewan formed the basis for a new national building standard that required R-20 insulation, blower-door test results of 1.5 ACH@50Pa or better, the installation of a heat-recovery ventilator and the use of non-toxic materials. The new standard became a partnership between NRCan and the Canadian Home Builders’ Association. It was the toughest standard in the world at that time and presaged by decades the advent of green building initiatives such as BuiltGreen or LEED (Leadership in Energy and Environmental Design). The new standard was voluntary, but its authors intended for its gradual integration into the national building code. With their sights set on plugging the hole in Canadian homes by the turn of the century, they named the new standard “R-2000.”
The above content is mostly a shortened re-print of the article High-Performance Homes - Why isn't Canada spearheading the movement to build more sustainable homes? published in June 2012 issue of Canadian Geographic.
Friday, September 13, 2013
Performance Home, Part 1
Those who have been following my postings have undoubtedly noticed that I try to avoid terms such as "sustainable", "green" or even "eco-friendly". Instead I prefer to talk about the "performance home" or "performance building" (sometimes also called a "high-performance building"). It is probably time to find out, finally, what a Performance Home is.
Is it something like a sphere, which, as everybody familiar with thermodynamics knows, is the most efficient shape?
Is it a shiny futuristic glass cube, which is usually much more functional?
Is it a house stuffed with all sorts of techno-gadgets?
Or is it some combination of these? Maybe, but not necessarily. One has to look at the performance house as a system, which it undoubtedly is. Then one must recognize that it - like any system - consists of many smaller systems (structure, insulation, ventilation, heating and cooling, water supply, electrical, etc.) constituting the whole, and is in turn part of a bigger system - the urban environment, the community, the natural surroundings and so on. Then it becomes obvious that a performance home built at Sun Peaks Resort at an elevation of 2,400 m would be totally different from one built in Saudi Arabia.
Saturday, September 7, 2013
Future Mobility
Inspired by the Insecta urban car concept presented by Moovee Innovations at UBC
and following some of my previous posts on high-speed rail, as well as on future marine and air mobility, I decided to descend back to Earth and check out what is in store for near-future urban commuters. I discovered - no surprise - that the work has been done for me by Inspiration Green.
Over the past few years a serious buzz has built over the electric car. The high-profile marketing and release of the Chevy Volt and the Nissan Leaf in the United States has prompted much of this attention while the mainstream press in Canada and the United States has been scrutinizing these products in reports and editorials. Every car show around the world is featuring electric vehicles, and it seems that they could become the next big thing in personal mobility.
Electric propulsion of automobiles has been around for over 100 years, so why are electric vehicles only now making a serious run at the buying public?
One answer is that automobiles have become one of the most pervasive symbols of the fossil fuel economy that is devastating the natural environment. Cars have therefore become a regular focal point in environmental debates about “what is to be done” about green house gas emissions and climate change, issues that have now entered into mainstream consciousness.
A sense of urgency exists that action needs to be taken by individuals, institutions and corporations in order to curb emissions. This has created a climate in which material objects are perceived as either friendly to the environment or damaging. The automobile industry is racing to capitalize on this notion, with electric cars leading the way as the number one “green” solution.
In the United States, the confidence in electric cars shown by the traditional auto manufacturers is driven in part by President Barack Obama’s plan to help build the clean energy economy, which is seen by the administration as a key to the country’s competitiveness in the 21st century. The U.S. government has already invested US$5 billion to stimulate an industry and market for electric cars.
Ottawa, on the other hand, has yet to earmark significant funds to this industry, thereby leaving the country in a chicken-and-egg situation: without the government funds to foster an electric car industry and stimulate a market plus help develop the infrastructure to serve it [e.g. charging infrastructure], electric vehicles may not emerge as a viable option. In the event that electric cars come into general use, Canada’s well-established automotive sector — a major employer — could be adversely impacted if not properly prepared. More action by the Federal government to support this sector will be needed, or Canada could be left behind other auto-producing centres.
But should Canadian tax money be used to stimulate a burgeoning electric car industry, or would government funds be more successful in reducing emissions if they went to developing more accessible public transportation or creating more and safer bicycle lanes, as has been done in Vancouver?
At first glance, zero-emissions cars look like a real solution to growing carbon emissions in a society and culture that is obsessed with cars as a prime form of transport. At this point we really do not have a lot of choice. In reality, however, the answer is much more complex and depends on whether one looks at the collective versus the individual benefits of this technology, or, in other words, what overall impact a modest adoption of electric vehicles will have on cutting tailpipe emissions.
Also, depending on where you charge your electric vehicle, pollution could simply move upstream from the tailpipe to a coal-fired power generator. Another factor is that large amounts of lithium will be needed for the batteries that power electric motors, representing a horizontal shift in reliance from one extractive industry, oil, to another: lithium. And simply “greening” the automobile will do nothing to curb the destruction caused by roads, parking lots and traffic congestion.
Alone they are not a real solution, but viewed as part of an overall national strategy, the electric car, together with high-performance buildings and evolutionary transition to renewable sources of energy, could play a pivotal role in weaning society off of its reliance on fossil fuels.
Aptera electric car
Thursday, August 29, 2013
Electric Aircraft - Part 2. Solar
SolarWorld e-One
Specifications:
- MTOW: 300 kg
- empty weight (without batteries): 100 kg
- battery weight: 100 kg
- payload: 100 kg
- wing span: 13 m
- wing surface: 10 m²
- max. engine power: 16 kW
- max. range: up to 1,000 km
- max. endurance: more than 8 hours
- cruise speed: 140 km/h
- aspect ratio: 16.9
- best glide ratio: 33
- certification: Ultralight class, Germany (LTF-UL)
The solar electric plane generates neither CO2 nor other emissions, and produces no noise pollution either. This may be the beginning of a new era in aviation history - and, along with electric cars and electric boats (see Solar Ship), an important step towards an emission-free future of mobility.
While on the ground, the lithium-ion battery is charged from the solar hangar.
Wednesday, August 28, 2013
Electric Aircraft
The German firm PC-Aero has introduced the first production all-electric aircraft, the Electra One. It belongs to the ultralight class and incorporates a number of technical innovations:
- modern composite glass-/carbon-structure
- advanced aerodynamic design
- best propeller efficiency (90%)
- light lithium-ion batteries
- highly efficient electric drive
- 13.5 kW (continuous) brushless electric motor
- more than three hours flight time
- over 400 km range
- cruise at 160 km/h
- zero CO2-emission (using a Solar Hangar for charging)
- very low noise level (under 50 dB at a cruise propeller speed of 1,400 RPM)
- operating costs below 35 €/hour and 0.2 €/km
Friday, August 16, 2013
Internet of Things and other things
In the article The Cognitive Net Is Coming, published in a recent issue of IEEE Spectrum, the author (Antonio Liotta of the Eindhoven University of Technology) states:
Perhaps as early as the end of this decade, our refrigerators will e-mail us grocery lists. Our doctors will update our prescriptions using data beamed from tiny monitors attached to our bodies. And our alarm clocks will tell our curtains when to open and our coffeemakers when to start the morning brew. (I would say the coffeemaker and the curtains should be smart enough - if they are not yet - to know when to start brewing or when to open. Rather than being hard-programmed by time, they should sense when these functions are needed.)
By 2020, according to forecasts from Cisco Systems, the global Internet will consist of 50 billion connected tags, televisions, cars, kitchen appliances, surveillance cameras, smartphones, utility meters, and what not. This is the Internet of Things, and what an idyllic concept it is.
But here’s the harsh reality, he says: without a radical overhaul of its underpinnings, such a massive, variable network will likely create more problems than it proposes to solve. The reason? Today’s Internet just isn’t equipped to manage the kind of traffic that billions more nodes and diverse applications will surely bring.
The author then proceeds to devise a more intelligent (cognitive) Internet protocol which would presumably solve the problems of today's global networks. Without going into the details of the cognitive protocol, we can find in it a useful (even if not entirely new) idea: endow every connected, processor-equipped device with the ability to route data. Given the computational capabilities of today’s consumer devices, there’s no reason for neighboring smart gadgets to communicate over the core network. They could instead use any available wireless technology, such as Wi-Fi or Bluetooth, to spontaneously form “mesh networks.” This would make it possible for any terminal that taps into the access network - tablet, television, thermostat, tractor, toaster, toothbrush, you name it - to relay data packets on behalf of any other terminal.
By off-loading local traffic from the Internet, mesh networks would free up bandwidth for long-distance services, such as IPTV, that would otherwise require costly infrastructure upgrades. These networks would also add routing pathways that bypass bottlenecks.
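As a purely illustrative sketch (not the protocol proposed in the article), the local relay decision could be as simple as preferring a healthy mesh neighbour over the core-network gateway. The neighbour table and link-quality threshold below are invented for the example.

```python
# Illustrative sketch only: a terminal decides whether a packet can be relayed
# over the local mesh (Wi-Fi/Bluetooth neighbours) or must go out through the
# core-network gateway.

from typing import Optional

# Hypothetical neighbour table: node id -> link quality (0..1), discovered locally.
neighbours = {"thermostat": 0.9, "tv": 0.6, "toaster": 0.3}

def next_hop(destination: str, min_link_quality: float = 0.5) -> str:
    """Prefer a direct mesh neighbour with a usable link; otherwise use the gateway."""
    quality: Optional[float] = neighbours.get(destination)
    if quality is not None and quality >= min_link_quality:
        return destination        # deliver over the local mesh, off the core network
    return "core-gateway"         # fall back to the ordinary Internet path

print(next_hop("thermostat"))  # -> thermostat (local mesh)
print(next_hop("fridge"))      # -> core-gateway
```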
To handle data and terminals of many different kinds, it is suggested that the routers (including the terminals themselves) use methods for building and selecting data pathways borrowed from a complex network that already exists in nature: the human autonomic nervous system.
Comparing a complex system to the human body is a popular metaphor. The autonomic nervous system controls breathing, digestion, blood circulation, body heat, the killing of pathogens, and many other bodily functions. It does all of this, as the name suggests, autonomously—without our direction or even our awareness. Most crucially, the autonomic nervous system can detect disturbances and make adjustments before these disruptions turn into life-threatening problems.
In fact, the parts of the brain that control this process rely on a multitude of inputs from many subsystems, including taste, smell, memory, blood flow, hormone levels, muscle activity, and immune responses. Does the food contain harmful bacteria that must be killed or purged? Does the body need to conserve blood and fuel for more important tasks, such as running from an enemy? By coordinating many different organs and functions at once, the autonomic system keeps the body running smoothly.
By contrast, many current systems (the Internet included) address a disturbance, such as a spike in traffic or a failed node, only after it starts causing trouble. Routers, servers, and computer terminals all try to fix the problem separately, rather than work together. This often just makes the problem worse rather than fixing it.
One idea, proposed by IBM, is the Monitor-Analyze-Plan-Execute (MAPE) loop, or more simply, the knowledge cycle. Algorithms that follow this architecture must perform four main tasks:
First, they monitor a router’s environment, such as its battery level, its memory capacity, the type of traffic it’s seeing, the number of nodes it’s connected to, and the bandwidth of those connections.
Then the knowledge algorithms analyze all that data. They use statistical techniques to determine whether the inputs are typical and, if they aren’t, whether the router can handle them.
Next, they plan a response to any potential problem, such as an incoming video stream that’s too large. For instance, they may figure the best plan is to ask the video server to lower the stream’s bit rate. Or they may find it’s better to break up the stream and work with other nodes to spread the data over many different pathways.
Lastly, they execute the plan. The execution commands may modify the routing tables, tweak the queuing methods, reduce transmission power, or select a different transmission channel, among many possible actions.
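As an illustration of how such a knowledge cycle might look on a single node, here is a minimal sketch; the metric names, thresholds and actions are invented for the example and are not taken from IBM's specification.

```python
# A minimal MAPE-style (Monitor-Analyze-Plan-Execute) loop for a single node.
# Metric names, thresholds and actions are hypothetical.

def monitor(node):
    # Gather the node's local view of its environment.
    return {
        "battery": node["battery"],            # fraction remaining
        "queue_fill": node["queue_fill"],      # fraction of buffer in use
        "incoming_mbps": node["incoming_mbps"],
        "link_mbps": node["link_mbps"],
    }

def analyze(metrics):
    # Decide whether the observed load is something this node can handle.
    overloaded = (metrics["incoming_mbps"] > 0.8 * metrics["link_mbps"]
                  or metrics["queue_fill"] > 0.9)
    low_power = metrics["battery"] < 0.2
    return {"overloaded": overloaded, "low_power": low_power}

def plan(assessment):
    # Choose a response; in a real system this step could involve neighbouring nodes.
    if assessment["overloaded"]:
        return "ask_sender_to_lower_bitrate"
    if assessment["low_power"]:
        return "reduce_transmission_power"
    return "no_action"

def execute(action, node):
    # Apply the chosen action to the node's configuration.
    if action == "ask_sender_to_lower_bitrate":
        node["requested_bitrate_mbps"] = 0.5 * node["incoming_mbps"]
    elif action == "reduce_transmission_power":
        node["tx_power"] = 0.5 * node.get("tx_power", 1.0)
    return node

# One pass of the knowledge cycle on a hypothetical node.
node = {"battery": 0.9, "queue_fill": 0.4, "incoming_mbps": 90.0,
        "link_mbps": 100.0, "tx_power": 1.0}
node = execute(plan(analyze(monitor(node))), node)
print(node)
```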
An integrated building energy technology advanced system architecture (ASPA) can apply some of these principles. For example, the curtains mentioned above can react to the level of light and open when needed, but also close when the room temperature rises because of excessive sunlight and approaches the point at which air conditioning would have to be turned on. In hydronic systems, why maintain the storage tank at its maximum temperature during periods of low or no demand for hot water, only to lose that energy as heat loss? Instead, the system can monitor the pattern of daily use and predict demand, reducing energy use to a minimum. Outside temperature follows a daily pattern but is, of course, subject to significant variations; a knowledge-based algorithm combined with an adaptive feedback loop could control the air circulation. An irradiation sensor can tell the solar thermal collector whether to adjust the flow rate in the system, either to increase efficiency or to prevent overheating.
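Here is a minimal sketch of the storage-tank idea, assuming a learned hourly demand profile; the profile values, temperatures and preheat lead are invented for the example. A real controller would also respect safety limits and combine this prediction with feedback from the tank sensor.

```python
# Hypothetical sketch: instead of holding the tank at its maximum temperature
# all day, learn a daily hot-water demand profile and raise the setpoint only
# ahead of expected demand.

# Learned average demand per hour of day (litres), e.g. from monitored history.
hourly_demand_l = [2, 1, 1, 1, 2, 10, 40, 60, 30, 10, 5, 5,
                   8, 5, 5, 5, 10, 30, 50, 40, 20, 10, 5, 2]

T_MAX = 60.0   # deg C, setpoint ahead of heavy demand
T_IDLE = 45.0  # deg C, standby setpoint during low-demand periods

def tank_setpoint(hour: int, preheat_lead_h: int = 1, heavy_demand_l: float = 25.0) -> float:
    """Return the tank setpoint for a given hour, preheating before demand peaks."""
    upcoming = hourly_demand_l[(hour + preheat_lead_h) % 24]
    return T_MAX if upcoming >= heavy_demand_l else T_IDLE

for h in (3, 5, 17):
    print(h, tank_setpoint(h))   # low at night, raised just before the morning and evening peaks
```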
An evolution of this system architecture, Advanced Sustainable Control Energy Network Technology, will be key to keeping the system in check. Not only will it help prevent individual components from failing, but by monitoring data from neighboring nodes and relaying commands, it will also create feedback loops within the local network. In turn, these local loops swap information with other local networks, thereby propagating useful information across the Sustainable Network.
Saturday, August 3, 2013
Solar Ship
Named Tûranor PlanetSolar (Tûranor means "power of the sun" in J.R.R. Tolkien's mythology), this unique vessel is powered and propelled exclusively by solar energy!
Length | 31 m |
Width | 15 m |
Height | 6.30 m |
Draft | 1.55 m |
Average speed | 5 knots |
Installed solar power | 93.5 kW |
Average engine consumption | 20 kW |
Surface area of PV modules | 516 m² |
Module efficiency | 18.8% |
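As a quick consistency check, the installed solar power should be close to the PV area multiplied by the module efficiency at standard irradiance; the 1 kW/m² irradiance used below is an assumption, not a figure from the post.

```python
# Consistency check of the figures above: PV area times module efficiency
# times an assumed standard irradiance approximates the installed solar power.

pv_area_m2 = 516
module_efficiency = 0.188
standard_irradiance_kw_m2 = 1.0   # assumption: standard test conditions

estimated_peak_kw = pv_area_m2 * module_efficiency * standard_irradiance_kw_m2
print(f"estimated peak PV power: {estimated_peak_kw:.0f} kW")  # ~97 kW vs. 93.5 kW installed
```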
PlanetSolar Deepwater is a scientific expedition by the University of Geneva along the route of the Gulf Stream.
More amazing images: Planet Solar Gallery
http://www.planetsolar.org/
Friday, July 19, 2013
E-Infrastructure is a Global Trend
The Skolkovo Institute of Science and Technology (Skoltech) in Moscow, Russian Federation, and the National Association of Research and Educational e-Infrastructures (e-ARENA) announced the first steps in their partnership to improve national and international e-infrastructure.
On June 26, Skoltech President Edward Crawley and e-ARENA Director-General Marat Biktimirov signed a Letter of Intent to collaborate in establishing permanent high bandwidth networking between Skoltech and its national and international partners and collaborators.
Skoltech has recently launched its Center for Stem Cell Research, which includes a close long-term partnership with academic partners in the Netherlands, Russia and the USA. Researchers at the Center will rely on state-of-the-art e-infrastructure to advance research programs in the application of new genomics technologies towards the realization of personalized medicine. Skoltech and its international collaborators will need access to an exponentially growing amount of genomic data. This data must then be stored, transferred and analyzed, demanding a high level of network speed between Skoltech and the rest of the world. The Institute also expects to launch at least 14 more CREIs, each of which will include international and national collaborations targeting complex and data-rich scientific challenges.
Besides these research projects, Skoltech will develop opportunities for web-based classes and data-intensive collaborative experimentation and modeling, similar to such educational initiatives as MITx and edX. All of these educational and research initiatives also benefit from increased network connectivity.
Skoltech has already launched a partnership with SURFnet to begin providing for the high-bandwidth needs for this first Center for Research, Education and Innovation (CREI). The new partnership with e-ARENA will extend networking options by leveraging Russian research and education networks RASNet, RUNNet and RBNet, as well as international connections to the pan-European GEANT network and the advanced science network GLORIAD.
Skoltech Acting CIO, Professor Gabrielle Allen, said of the new cooperation, “Robust, world-class cyberinfrastructure is absolutely essential for modern data-intensive science, which today takes place in a global setting. As a new institute, we are delighted to be collaborating with e-ARENA and leveraging their long and deep experience in networking to provide Skoltech researchers and educators with the necessary e-infrastructure.”
The Network of Performance Facilities proposed in the previous post, Lessons Learned - Part 3, is one such example of a modern data-intensive scientific and research application. The Skolkovo Institute of Science and Technology, through one of its Centers for Research, Education and Innovation (CREI), could become a potential international partner in the Consortium.
Wednesday, July 10, 2013
Old "New" Challenge for performance buildings, or Lessons Learned - Part 3
Monitoring data in a performance building proves to be a challenge for a reason that may seem unexpected: there is no clear understanding of what data to collect, at which points, how often to take measurements, and for how long to store the data.
One approach is the "bulldozer" approach: collect as much data as possible, at as many locations as possible, as frequently as possible. At first glance this seems to be a bulletproof method - you will never miss anything. In fact, the opposite is true. The amount of data collected quickly becomes overwhelming and unmanageable, and apart from the difficulty of retrieving a necessary piece of information, it presents another unexpected challenge. The value of storing the data lies in the ability to keep track of historical records, because only a relatively long period of time can be representative of the actual performance of any complex system, a performance building included. Now, an attempt to estimate what it would take to create data storage for one such building using the "bulldozer" approach hits an obstacle: a data warehouse keeping track of ALL the data would cost over $400,000! No wonder it is decided - or rather it happens automatically - that the vast amount of accumulated data is discarded after a relatively short period of time to make room for the new batch of data. But what about the analysis?
What if we need the data for more than one month, when typically we want to monitor performance for at least a year? How can we even be sure that what we need is still there, when newly collected data keeps spilling over?
Following up on, and consistent with, what I have discussed previously (see e.g. Lessons Learned - Part 2), there is a need for an agreed-upon hierarchy of data sets and a common format for data being collected, stored and retrieved. In time it may, and probably will, evolve into an industry standard. But the work needs to be started, or we are going to face a hurdle not unlike, or even worse than, the Tower of Babel: not only unable to speak one language, but unable even to understand ourselves...
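To make the point concrete, here is a hypothetical sketch of what such an agreed hierarchy and retention policy could look like; the point names, sampling intervals and retention periods are invented for the example.

```python
# Hypothetical illustration of the two ideas above: (1) an agreed, hierarchical
# naming scheme for monitoring points, and (2) a retention policy that keeps
# long-term history affordable by downsampling older data instead of discarding it.

from dataclasses import dataclass

@dataclass
class PointSpec:
    point: str          # hierarchical name: site/system/equipment/measurement
    interval_s: int     # raw sampling interval
    raw_days: int       # keep raw samples this long...
    rollup: str         # ...then keep only this aggregate (e.g. hourly mean)

schema = [
    PointSpec("sunpeaks/house01/hydronic/tank_temp_c",      60,   30, "hourly_mean"),
    PointSpec("sunpeaks/house01/solar/collector_flow_lpm",  60,   30, "hourly_mean"),
    PointSpec("sunpeaks/house01/electrical/main_power_kw",  10,    7, "15min_mean"),
    PointSpec("sunpeaks/house01/outdoor/air_temp_c",       300,  365, "daily_min_max"),
]

for spec in schema:
    print(f"{spec.point}: sample every {spec.interval_s}s, "
          f"raw for {spec.raw_days} days, then {spec.rollup}")
```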
In order to be able to communicate we need to speak one language.
Thursday, June 20, 2013
Performance Home designs presented at the Council
From October 2012 through April 2013, Ascent Systems Technologies, in cooperation with the Architecture and Technology Department of Thompson Rivers University, conducted a students' contest for the best design of a performance home for Sun Peaks Resort. Twenty actual vacant Sun Peaks lots were chosen by the students. The competition was completed in April, when Ascent Systems Technologies presented a cash prize for the best performance home design, while Sun Peaks Corporation supported the initiative by donating two ski passes to the author of the design that best accommodated the resort guidelines. Some of the best designs were presented at the Sun Peaks Municipal Council meeting on June 24, 2013. The purpose was to show the Council and the residents that homes built at Sun Peaks Resort can be energy-efficient, environmentally friendly and, at the same time, seamlessly integrated into the overall look of the resort. The presentation was received with great interest.
Performance homes, together with other efforts undertaken by the Sun Peaks Municipality, will form part of the Sustainable Community concept at the Resort.