Geographic Frontiers

Droning On

The Intersection of Drones and the Internet of Things


In a previous blog, I referred to Unmanned Aircraft Systems, UAS's or drones for short, as representing the third revolution in remote sensing.

The real potential of UAS's will be their integration into a web of sensors that is in turn integrated into the Internet of Things. Of course, I like to highlight the geographic aspect and refer to it as the Internet of Where Things Are.

The sensorweb concept provides a useful construct for how we need to think of UAS's operating within the Internet of Things. Developed over 15 years ago, the concept originally envisioned a distributed network of sensors wirelessly communicating with each other. The concept has since morphed into connecting these sensors to the World Wide Web and recognizing that the sensors often are not static. Data from remote sensing satellites, manned aircraft, and now UAS's can be envisioned as components of a sensorweb. The Open Geospatial Consortium has a Sensor Web Enablement initiative that attempts to define standard models and interfaces for constructing sensorwebs.
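
To make the idea concrete, here is a minimal sketch of what pulling observations from a sensorweb node could look like, modeled loosely on the Sensor Observation Service (SOS) from the OGC Sensor Web Enablement suite. The endpoint URL, offering and observed property are hypothetical placeholders, not a real service.

```python
# Sketch: query a (hypothetical) OGC Sensor Observation Service endpoint
# for observations, using the SOS 2.0 key-value-pair request style.
import requests

SOS_ENDPOINT = "http://sensors.example.org/sos"  # hypothetical endpoint

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "uas_air_quality",             # hypothetical offering
    "observedProperty": "particulate_matter",  # hypothetical property
}

response = requests.get(SOS_ENDPOINT, params=params)
response.raise_for_status()
print(response.text)  # observations, typically XML-encoded
```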

The wake-up call for the UAS industry is that drones will be part of a network.  Just as drones will facilitate the collection of data concerning things and feed this data into the Internet of Things, drones and related sensors are also part of the Internet of Things.

As the commercial UAS industry begins to take off in the United States, we tend to think of it as a more flexible and less expensive alternative to manned aircraft when it comes to collecting imagery and other remote sensing data. There is absolutely nothing wrong with this view. However, the "next big thing" for the industry will be to integrate UAS's into a web that aggregates data about where things are, what they are doing and their status. UAS's become a routine and essential part of a network of sensors, things, and things that are also sensors.

Here are my thoughts on the implications this has for UAS-related technology and operations, as well as for those who seek to leverage UAS's.

Exquisite Detail and Extreme Resolution. Sensors on drones can potentially collect detailed data at extreme resolutions. Because of their small size, UAS's can get really close. Because of their low cost to operate, whatever detail they miss on the first pass can be collected on subsequent passes over a long period of time, in other words, through persistence. We think of drones operating out-of-doors, but drones will also operate in interior spaces. They may fly through warehouses looking for a particular item tucked away on a high shelf. They may be equipped with sensors to interrogate passive radio frequency identification (RFID) tags. In multiple passes through a warehouse at night, they can conduct a complete inventory. In a factory or power plant, they can be scheduled to conduct periodic visual inspections of equipment, providing better access and obviating the need for inspectors to get on lifts or ladders. Under the sensorweb concept, software will correlate the data collected by the drones with data collected by static or hand-held sensors, providing a comprehensive visual record of the inspection.
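
As a toy illustration of the warehouse inventory case, here is a sketch of reconciling RFID tag reads from nightly drone passes against an expected inventory; the tag IDs and shelf locations are invented for the example.

```python
# Sketch: reconcile RFID tag reads from drone passes with expected inventory.
expected = {"TAG-0001": "A-12-3", "TAG-0002": "B-04-1", "TAG-0003": "C-09-2"}

# (tag_id, shelf) reads accumulated over several passes through the warehouse
scans = [("TAG-0001", "A-12-3"), ("TAG-0003", "C-09-2"), ("TAG-0003", "C-09-2")]

seen = {tag: shelf for tag, shelf in scans}
missing = set(expected) - set(seen)              # tags never read
misplaced = {t: (expected[t], seen[t])           # tags read on the wrong shelf
             for t in seen if t in expected and seen[t] != expected[t]}

print("Missing tags:", sorted(missing))
print("Misplaced tags (expected, found):", misplaced)
```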

Facilitators for Other Sensors. We often think of UAS's in terms of carrying on-board sensors that collect data. Common examples include video imagery, infrared imagery or Light Detection and Ranging (LIDAR). Under the sensorweb construct, UAS's may be collecting data, but they just as likely could be facilitators for other sensors. In remote areas, it may not be cost effective to connect sensors with physical communication links, and providing each sensor with its own large antenna and power source may also be too expensive. Likewise, the data may not have to be accessed for analysis all that often. For example, there may be a dispersed sensor field to monitor surface subsidence due to underground mining activity. Since the changes would be small over time, data from the sensors would need to be collected infrequently. Dispersed water quality sensors might only need to be surveyed occasionally under the assumption that normal industrial activity will have little if any impact. If an anomaly is noted or a chemical spill occurs, data collection frequency can be easily and rapidly increased. In these cases, a UAS would be used to periodically fly over the sensor network and upload data from each sensor at close range via a low-power data link. Likewise, the drone could carry reprogramming instructions for a sensor that could be downlinked as it flies within range.

Sensors also may be carried by people. In an emergency situation, we think of a drone being launched to provide visual situational awareness. In this age of wearable devices, drones can collect data on the location of workers when wireless communications may be hindered by objects in the building or, in the case of working outdoors, by distance from a wireless relay. While this may sound a bit creepy from a privacy perspective, it is much more reasonable for first responders or in certain workspaces where employee safety is a concern. As part of a sensor network, drones may serve to relay video data from cameras worn by individuals on the ground. In an emergency situation, wearable devices provide not only location and identification information, but also physiological information. As they move through a disaster site, first responders can apply these devices to victims, enabling emergency medical personnel to immediately know the location and status of victims. Small, low-flying drones collect the data from these low-energy wearable sensors and relay it for correlation with and integration into the evolving sensorweb. This is not much different from the "identification friend or foe (IFF)" technology that the military continues to develop for individual soldiers.
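
Returning to the subsidence-sensor example, here is a rough sketch of one such "data mule" pass: the UAS visits each node on a route and uploads whatever readings the node has buffered since the last visit. The node IDs, coordinates and radio interface are all hypothetical, standing in for whatever short-range, low-power link a real system would use.

```python
# Sketch: a UAS "data mule" pass over a field of low-power sensor nodes.
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: str
    lat: float
    lon: float

def fetch_buffer(node):
    """Hypothetical short-range link: wake the node at close range and
    pull the readings it has buffered since the last pass."""
    return [{"node": node.node_id, "subsidence_mm": 0.4}]  # placeholder data

route = [
    SensorNode("SUB-01", 38.25, -104.61),
    SensorNode("SUB-02", 38.26, -104.60),
]

collected = []
for node in route:  # visit each sensor in turn
    collected.extend(fetch_buffer(node))

print(f"Uploaded {len(collected)} readings from {len(route)} nodes")
```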

New Sensor Types. As I mentioned, we tend to think of UAS collection in terms of some type of imagery data. Under the sensorweb construct, a wide variety of new sensors will be integrated onto UAS platforms. I mentioned radio frequency identification (RFID) sensors earlier. Drones carrying chemical detection sensors could be used for routine air quality monitoring, as well as for determining the magnitude and composition of accidental chemical leaks or gases from a chemical fire. Coupled with ground-based sensors on a sensorweb, the UAS-mounted sensor would provide a three-dimensional perspective of atmospheric air quality.

Pre-programmed flights. If UAS's are to be a routine and reliable part of a sensorweb--particularly in an indoor setting--flight operations will need to be routine. This means that flights often will be pre-programmed and the UAS's will operate autonomously. Autonomy currently is anathema to the FAA when flying UAS's in outdoor airspace. Until further confidence is developed in UAS operations, the FAA is demanding line-of-sight operations with a pilot constantly monitoring the drone's flight. One advantage to operating UAS's indoors is that they are beyond the reach of FAA regulations. (OSHA will want us to be wearing hard hats and safety glasses.)
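
As a sketch of what "pre-programmed" could mean in practice, here is a toy indoor inspection mission expressed as a list of waypoints, with a basic sanity check before the autonomous flight is released. The coordinates are local to a hypothetical building origin, not any real flight-control format.

```python
# Sketch: a pre-programmed indoor inspection route as simple waypoints.
MISSION = [
    # (x_m, y_m, altitude_m, action)
    (0.0,  0.0, 0.0, "takeoff"),
    (12.0, 3.5, 4.0, "capture_image"),
    (24.0, 3.5, 4.0, "capture_image"),
    (0.0,  0.0, 0.0, "land"),
]

def validate(mission):
    """Reject obviously malformed missions before autonomous flight."""
    assert mission[0][3] == "takeoff" and mission[-1][3] == "land"
    assert all(alt >= 0 for _, _, alt, _ in mission)
    return True

if validate(MISSION):
    print(f"Mission accepted: {len(MISSION)} waypoints")
```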

Automated correlation and analysis. One of the key considerations of using drones as part of a sensorweb is ensuring the data they collect is correlated with data collected by other sensors, as well as with data the UAS collects over time. The strength of the sensorweb concept and the Internet of Things is that--at its best--it provides a real-time, continuous "picture" of where things are and their status. It will be essential to have processors and software on the net that can sort through all the data being collected--some by drones, some by static sensors, some by wearables--and present the user with accurate, understandable awareness of the things that are important to them.
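
A minimal sketch of one such correlation rule, assuming each reading carries a timestamp and planar coordinates: treat two readings as candidates for fusion when they are close in both time and space. The thresholds and values are arbitrary examples.

```python
# Sketch: correlate a drone observation with a static sensor's reading
# by time window and distance before fusing them into one "picture".
from math import hypot

def correlated(a, b, max_seconds=60, max_meters=50):
    """True when two readings are close enough in time and space to fuse."""
    close_in_time = abs(a["t"] - b["t"]) <= max_seconds
    close_in_space = hypot(a["x"] - b["x"], a["y"] - b["y"]) <= max_meters
    return close_in_time and close_in_space

drone_obs  = {"t": 1000, "x": 120.0, "y": 45.0, "no2_ppb": 31}
static_obs = {"t": 1020, "x": 140.0, "y": 50.0, "no2_ppb": 29}

print(correlated(drone_obs, static_obs))  # True: fuse into one picture
```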
 
Even in the age of autopilot and fly-by-wire, manned aviation can still legitimately hold onto the romantic notion of the aircraft being an extension of the pilot, reliant on his or her skill and knowledge to fly successfully. To a certain extent, that same notion has been transferred to UAS's. Successful flight is still an extension of the knowledge and skill of a pilot, even though he or she is on the ground looking at a computer screen or tablet rather than through a cockpit window. At the risk of shattering the romance of aviation, the real revolution of UAS's will come when, through routine and autonomous operations, they become just another sensor or relay device plugged into the Internet of Things. On the one hand, a UAS feeds information about things into the Internet of Things. On the other hand, it becomes just one more thing on the Internet of Things.


© 2015 Geographic Frontiers

The Internet of Where Things Are

Fifteen years ago, Kevin Ashton coined the term "The Internet of Things" or IoT. As with many far-reaching concepts, implementation has had to await expansion and enhancement of the underlying technology. Smaller and less expensive computer chips, increasingly available wireless access to networks, cloud computing, and Big Data management capabilities are far enough along to begin bringing Ashton's concept into reality.

Last week, AT&T, Cisco Systems, General Electric, IBM and Intel announced the establishment of the Industrial Internet Consortium focused on addressing another aspect that has hampered IoT progress--standards.  Standards are needed for things to effectively communicate data about themselves across the Internet and then be directed to act--again across the Internet--by people or other things. 

Embedded within the IoT concept is knowing where things tied to the Internet are physically located.  With apologies to Mr. Ashton and others, I call this “The Internet of Where Things Are” or IoWTA. 

Why single out the locational aspect of the Internet of Things?  To get our profession to pay attention!  I believe this will be a sea change for the—pick your term—geographic, geospatial or geographic information systems (GIS) community.   While GIS software developers are beginning to explore the implications for their product lines, I believe we are largely oblivious to the freight train rolling down the track.  

IoT-relevant software and development projects such as GeoWeb, Digital Earth and--for vetted government users--the National Geospatial-Intelligence Agency's (NGA) new Map of the World are important steps along the way; but IoT and IoWTA will change how we think about maps, the type of geographically based data we analyze, how we manage that geographically tagged data and, ultimately, how we present it to our customers.

Below are five trends we can expect to see over the next several years as IoT develops and matures.  In examining them individually, we can identify examples where we are already moving along the continuum implied by each trend.  The sea change comes as we begin to look at the magnitude of the changes as each trend advances simultaneously.  While not all agreed with The Next Web when they declared 2013 to be “the year of the Internet of Things”, the technology appears to be in place for IoT implementations to move from a linear to an exponential growth path.

Here are five trends we need to be ready for.

1.  From identification to aggregation. In classic cartography, mapmakers both identify a thing—we often refer to them as features—and determine, with varying degrees of accuracy, where it belongs in geodetic space. We find ourselves still spending a considerable amount of time and energy performing that function. We've become more efficient as remote sensing data has become more of a commodity. We also use the Internet to better distribute that effort, OpenStreetMap being a prime example. From our geographic or GIS perspective, features are things. Features, and their locations, will be increasingly identifiable via the Internet. Sensors in and along rivers will monitor water volumes and channel changes and relay that information to the Internet. Engineers will file descriptions of new bridges and roads along with locational data, and that information will be on the Internet. Descriptions and locational information concerning power lines, pipelines, office buildings and homes are, and increasingly will be, discoverable on the Internet. Of course, the challenge in IoWTA is that manufactured things and other things that move or can be moved will be identifiable on the Internet—and they will be self-identifying and self-locating as well. Ranchers will look at a live map display in their kitchen and know where their cattle are at any specific moment. Tags and deployed sensors will monitor wildlife numbers and movements for biologists. There will be a lot of things out there on the Internet with locational tags. Our challenge is no longer identifying them, but deciding how best to aggregate them to provide the information our clients need, whether those things are fixed in geodetic space or move through it (see the sketch below).
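
To make the rancher example concrete, here is a sketch of one simple aggregation: snapping self-reported positions to coarse grid cells for a live map display. The animal IDs, positions and cell size are invented.

```python
# Sketch: aggregate self-located things into grid cells for a live map.
from collections import Counter

def cell(lat, lon, size_deg=0.01):
    """Snap a position to a coarse grid cell (roughly 1 km at this size)."""
    return (round(lat / size_deg) * size_deg, round(lon / size_deg) * size_deg)

positions = [("cow-17", 41.1502, -104.8203),
             ("cow-18", 41.1504, -104.8207),
             ("cow-19", 41.1611, -104.8330)]

counts = Counter(cell(lat, lon) for _, lat, lon in positions)
for (lat, lon), n in counts.items():
    print(f"{n} head near ({lat:.2f}, {lon:.2f})")
```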

2.  From transient to persistent. We tend to think of our analysis and products as snapshots in time. We compare traffic patterns from one month to the next. We look at changes to the urban environment from one year to the next. Any temporal analysis we do comes across as freeze-frame photography as we attempt to assess incremental changes across time. In IoWTA, our maps will be persistent. They will always be "on". We'll be expected to show how things are moving through geodetic space and interacting with other things in real time.

3.  From layers to models. I'm sick of layers. Within the geographic and GIS community, we do a disservice to ourselves and to what we do when we describe it as generating layers. In the world of IoWTA, we will do more modeling than layering. We will be expected to mathematically relate things to each other. Based on mathematical models, we'll need to understand how things move through time and multi-dimensional space. Layers imply two dimensions. IoWTA will operate in three dimensions and across time (a toy example follows below).
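
As a toy example of modeling rather than layering, here is a thing represented as state (position plus velocity) that can be queried at any moment, rather than drawn once as a static layer. The constant-velocity assumption and the numbers are purely illustrative.

```python
# Sketch: a moving thing as a model that answers "where is it at time t?"
from dataclasses import dataclass

@dataclass
class MovingThing:
    x: float
    y: float
    z: float   # position at t0, meters
    vx: float
    vy: float
    vz: float  # velocity, meters per second

    def position_at(self, dt):
        """Constant-velocity model: position dt seconds after t0."""
        return (self.x + self.vx * dt,
                self.y + self.vy * dt,
                self.z + self.vz * dt)

truck = MovingThing(0.0, 0.0, 0.0, 12.0, 3.0, 0.0)
print(truck.position_at(10))  # (120.0, 30.0, 0.0) ten seconds later
```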

4.  From external to internal. Traditionally, as geographers and cartographers, we've been content with our "bird's eye" view of the world. We leave it to architects and computer-aided design programs to define interior spaces. In IoWTA, we will need software tools that allow us to move seamlessly from external space to internal spaces. It will no longer be sufficient to get a prospective customer from his home to the store. He will expect us to get him to the specific shelf where the product he wants is located.

5.  From descriptions to transactions. The Activity-Based Intelligence (ABI) concept has been a radical approach to how intelligence organizations such as NGA leverage data from a variety of sensors and sources to develop patterns, understand relationships between people and things, and establish an understanding of the transactions that take place in the context of location. Although described in a variety of publications, I'm not sure how far this concept has infiltrated outside of the intelligence and defense communities. IoWTA will require ABI on steroids. Those same models I mentioned above will need to relate not only things to each other, but also how people interact with those things and the actions that need to be taken based on those transactions. Rather than simply knowing what a thing is and where it is, we will be expected to show how a location, and the environment tied to that location, affects things that are at, or potentially moving to or through, that location. We will then be asked to assess how the transactions between things at a location impact people at or moving through the location, and what role people will play in those transactions.


From IoWTA 1.0 to IoWTA 2.0

The trends I described above will become increasingly evident as we continue through this decade and move ever closer to "ubiquitous positioning" and a "physical-world web." (See http://en.wikipedia.org/wiki/File:Internet_of_Things.png) I call this IoWTA 1.0. We are becoming an increasingly self-identifying and self-locating population. We have made, and continue to make, progress in meshing the Internet of Where Things Are with the Internet of Where People Are. Via the Internet, we know how to get to a specific store or restaurant. We can track the progress of an item we ordered as it moves across the country to our home. We can get a good idea of what home we might want to buy based on our parameters and the neighborhoods in which candidate homes are located. In IoWTA 1.0, the fidelity with which we--as people--will be able to locate and interact with things will increase exponentially. Our network-connected car will alert us to stop at a specific gas station because it knows the gas is cheaper there than at the one further down the highway, closer to the point where we would run out. Businesses will no longer have to do daily inventories of expensive equipment they provide to field crews, since tags on the equipment will continually update its location.

IoWTA 2.0 will arrive when people are out of the loop and "software agents" and "advanced sensor fusion" allow things to respond to other things in the context of location. Embedded sensors may indicate a problem has occurred somewhere along a several-mile section of pipeline. An unmanned aircraft with a sensor is cued, programmed and launched to examine that stretch of pipeline, identify the exact location of the problem and potentially repair it. Sensors along roadways--or cars themselves--will detect when slowing has occurred on a major freeway due to an accident. Automatically, the timing of signal lights will be adjusted on the side streets to improve the flow of traffic as a large volume of cars begins to exit the highway. A component on a network fails. A robot is notified to pull a spare, move to the location within the building where the failed component is located, and replace it. Location is a critical component of each of these thing-to-thing transactions, and people will want to see "on the map" how and where these things are operating.
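
Here is a minimal sketch of the pipeline scenario as a thing-to-thing transaction: a software agent watches sensor readings and, when one crosses a threshold, cues a UAS inspection at that sensor's location. Every name, value and threshold is hypothetical.

```python
# Sketch: an IoWTA 2.0 software agent cues a UAS with no person in the loop.
def cue_uas_inspection(sensor_id, milepost):
    """Placeholder tasking: program and launch a UAS to the location."""
    print(f"UAS tasked: inspect pipeline near milepost {milepost} "
          f"(triggered by {sensor_id})")

def on_reading(sensor_id, milepost, pressure_psi, threshold_psi=550):
    """Respond to an embedded-sensor event in the context of its location."""
    if pressure_psi < threshold_psi:
        cue_uas_inspection(sensor_id, milepost)

on_reading("PL-SENSOR-42", milepost=87.3, pressure_psi=512)
```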


Why are IoT and IoWTA important?


IoT and IoWTA are important because they are really about people.  We use things, and we like things.  We transit across and over things as we move from one thing to another or to other people.  We all want to be more effective and efficient.  I don’t know about you, but in spite of all the technological advancements in GIS, geo-location, and information technology, I still spend too much time trying to find things—things that are fixed in geodetic space and things that move or can be moved.   Most companies feel that way too.  Companies can increase their value by always being able to find their things and keep track of them.  That’s why they’re interested in further developing IoT and why we in the—pick your term—geographic, geospatial or geographic information systems (GIS) community need to be engaged in IoWTA.

© 2014 Geographic Frontiers

A Word About Terminology (PART 2)



If deciding what to call that thing flying around in the air without a person in it is a challenge, so is describing what it is doing, particularly if its primary purpose is to “look around.”  (As a reminder, those of us in this business prefer to call them Unmanned Aircraft Systems or UAS’s for short).

I am compelled to again inform readers that "looking around" will not be the sole purpose of UAS's flown domestically. As the systems and their operation in the National Airspace System mature, they likely will replace crop dusters and take over some aspects of air freight operations. However--let's face it--a major function will be collecting data. Very good cameras and similar sensors come in small packages and can fit very well on small UAS's. Whether it's law enforcement, real estate, environmental monitoring, assessing infrastructure or reporting the news, we like pictures; and the "bird's eye" view can be extremely helpful in increasing our understanding of what's going on. It can also make privacy advocates nervous.

My purpose in covering this terminology is not simply about increasing a reader's vocabulary. First, I want people to realize that UAS's won't have the sole purpose of flying around to look at us. There will be a lot of data they will collect on infrastructure, weather phenomena, wildlife, ecosystems and things that have little to do with you or me. We need to understand and use descriptive terminology that relates to these applications. Second, words carry baggage. When we talk about what UAS's will do domestically, we need to use accurate terms to describe it. There's no need to create excess baggage. For some, surveillance is a term that carries a lot of baggage. Yes, domestic UAS's will be used for surveillance, whether monitoring potentially illegal border crossings or helping police locate a bank robber, kidnapper or terrorist who is on the run. But UAS's also will be flown to collect data for maps, to make assessments of damage caused by natural disasters and to make environmental assessments. I personally think we'll find that surveillance of people will constitute a small percentage of the aggregate of UAS flight hours. More often than not, UAS's will be collecting data on things. This doesn't diminish the need to be open about and address potential privacy issues, but we shouldn't be so quick to throw the baby out with the bath water, as some jurisdictions are considering doing by banning UAS's or offering permits to shoot them down.

The activity of a UAS “looking around” or being used for a  “bird’s eye” view has a number of terms associated with it.  Two terms come to us from the military/intelligence communities:

Surveillance is one generally associated with UAS's. At its simplest, surveillance means to keep watch over someone or something. Add the word persistent, as in persistent surveillance, and it means watching all the time. From an intelligence perspective, being able to conduct persistent surveillance is a good thing. It means we likely can know where the enemy is and what they are doing at any point in time. From a domestic perspective, the term is not so positive, unless law enforcement is keeping criminals--and not us regular folks--under surveillance. On the other hand, when we walk into most any store today, we are under persistent surveillance via the security cameras in the store.

Another military term is reconnaissance. It means to inspect, observe or survey in order to gain information--generally for military purposes. In general usage, reconnaissance means one is going over a specific area at a moment in time before moving on to other areas or "targets" that require observation. Surveillance, on the other hand, implies one is loitering over a specific area or "target" for a period of time to assess what activity is taking place.

Much like their military counterparts, unmanned aircraft systems carrying sensors can do either or both surveillance and reconnaissance. An unmanned aircraft system could assist law enforcement by conducting surveillance over a house during a hostage situation, allowing police to know immediately if the suspect attempts to escape. The same or a similar system could conduct reconnaissance by flying over a farmer's acreage once a week to assess if certain crops are experiencing stress due to inadequate irrigation. Likewise, an unmanned aircraft system could periodically conduct reconnaissance by flying along the route of a pipeline to ensure no leaks or other damage has occurred. However, since reconnaissance generally has a military connotation, this likely is not an appropriate use of the term.
 
While one definition of reconnaissance is to survey, using the term aerial survey to describe UAS data collection efforts doesn’t have that military cachet.  It has a long history of use with civil and commercial aircraft.  To survey means to take a comprehensive view of something.  It also has a mapping connotation since one definition of survey is to determine boundaries or position.  UAS’s will be used to conduct aerial surveys on a wide variety of things.  They may be collecting data on roads and buildings to make maps.  They may survey infrastructure.   For large animals--deer, elk or wild horses as examples--UAS’s could conduct wildlife surveys.  UAS’s will be used to survey crops.  They will likely be used to survey forests.   Survey or aerial survey is a good term for describing how, in many instances, UAS’s will be used.

Another term, used by the academic, civil and commercial communities, is remote sensing. In its broadest sense, remote sensing is the acquisition of information about an object or phenomenon without making physical contact with it. However, in general usage, it refers to data collected by aerial (or satellite) sensors. I'm not certain as to the pedigree of the term, but its usage seems to be tied to the growth of space-based sensors--Landsat, for example--developed by NASA and comparable agencies in other countries to collect data about our planet. Remote sensing generally has a scientific connotation, but it can be an all-encompassing term for data collected by UAS's. After all, they are "remote" from the surface of the earth.

So, UAS's may be used for surveillance, to conduct aerial surveys, or to collect remote sensing data. There are other terms that can be applied, but let's move on to the media UAS's may use to collect data. With remote sensing, we're generally talking about collecting data we describe as being part of the electromagnetic spectrum. This data may be in the visible range of the electromagnetic spectrum or in "nearby" regions of the spectrum, infrared being the most common.

This leads us to a brief discussion of what it is that these UAS's will collect. We're most accustomed to the idea that UAS's will collect video imagery. Interestingly, the term "video" simply refers to the electronic medium for recording, copying and broadcasting a continuous stream of visual images that show motion. When we think of surveillance, we think of full motion video imagery. Since UAS's--if you accept my thesis--more often will be used to collect data about things rather than people, they'll be collecting remote sensing data in a variety of forms. Some of it will be in the form of images, but those images may be formed from other parts of the electromagnetic spectrum than we normally see. Infrared images are better for assessing plant health. Images formed from radar sensors can often more accurately measure elevation, making such data useful in aerial surveys. In some instances, there will be no images at all. UAS's will carry sensors into thunderstorms and even tornadoes to collect data to increase our knowledge of weather phenomena. UAS's will carry air quality sensors to make environmental assessments, or radiological sensors to assess radiation releases related to the rare nuclear accident.
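
A brief, concrete example of why infrared helps with plant health: the widely used Normalized Difference Vegetation Index (NDVI) compares near-infrared and red reflectance, exploiting the fact that healthy vegetation reflects strongly in the near-infrared. The pixel values below are invented for illustration.

```python
# Sketch: compute NDVI = (NIR - Red) / (NIR + Red) for a tiny image.
import numpy as np

nir = np.array([[0.55, 0.60], [0.20, 0.58]])  # near-infrared band
red = np.array([[0.08, 0.07], [0.18, 0.09]])  # visible red band

ndvi = (nir - red) / (nir + red)  # ranges from -1 to 1
print(np.round(ndvi, 2))          # higher values suggest healthier vegetation
```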

Surveillance, reconnaissance, aerial surveys, remote sensing, full motion video and imagery are all terms used to describe what UAS's will do and the type of information they will collect. Over the next few years, as we evolve the domestic use of UAS's in the United States, my hope is that our vocabulary expands to fully describe the wide range of uses UAS's will have and the positive contributions they will make. No matter how we describe it, we can't avoid the baggage some words carry. It's the excess baggage we want to avoid.

© 2013 Geographic Frontiers

A Word About Terminology (PART 1)




What is that thing flying around in the air without a person in it?

In my youth, I would have called it a radio-controlled model airplane. (Life was slower then, so we had time to use longer, more descriptive terms.)

Today, one's first reaction would be to call it a drone.

I'm not sure if it's due to the term's insect origins, but proponents don't really like it. They too like longer, more descriptive terms.

The first drone I learned about, in biology class, was a male honeybee, who was stingless and made no honey. His only utility was in continuing the species. This concept led to the additional meaning of a drone being a parasitic loafer.

Today's drones don't seem to fit that definition. Some military versions certainly can "sting", and drones aren't loafers. The military gets a great deal of use from them, and we expect their use to expand exponentially in the domestic market as the FAA develops guidelines for allowing them in the National Airspace System by 2015.

More likely, the usage derived from the verb form of making a dull, continued, low, monotonous sound like a hum or a buzz. (Hopefully, readers will detect the irony in the use of the term "Droning On" as the name for this blog. Although you may not agree, it is not my intent to be dull or monotonous.)

In recent years, the term has taken on a negative connotation as an impersonal device over which no one has control--in addition to making a dull, monotonous buzzing sound. It's that negative connotation that leads proponents in this industry to eschew the word, usually in favor of longer, more descriptive terms.


The favorite is Unmanned Aircraft System or UAS. It implies that there is not only an unmanned flight vehicle, but a system behind it. It includes a data link to a control system or ground station where there are PEOPLE operating the thing. Rightfully taking a holistic approach, the FAA is developing guidelines for integrating "unmanned aircraft systems", not "drones", into the National Airspace System. They're as concerned about the data link, the control system, and the people operating the thing as they are about the flight vehicle itself.

Unmanned aircraft and Unmanned Aerial Vehicle (UAV) are also commonly used for drones. Intended or not, the connotation here is that one is referring to the "thing in the air" and not the control system behind it. The U.S. Department of Defense has also moved away from using Unmanned Aerial Vehicle to the UAS term. (Unmanned Aerial Vehicle differentiated the things that flew in the air from Unmanned Ground Vehicles, which are also used in the Department of Defense.)

When the U.S. military began using these in earnest in the Vietnam War, they commonly referred to them as Remotely Piloted Vehicles or RPVs. Today, you will occasionally see similar terms such as remotely piloted aircraft or remotely operated aircraft. They must have liked these terms because they emphasized that there was a pilot involved somewhere. Is it a statement of our risk-averse society that we now refer to the aircraft as being "unmanned" to emphasize that no one will be hurt if they crash or are shot down?

Some proponents like to class unmanned aircraft as "robots", perhaps because it is more tech trendy, and to emphasize that these flying things will do real work such as spraying fertilizer on crops, herding cattle, and hauling freight. (Is simply flying around looking at things real work?) Robot was coined by a Czech playwright for his 1920 play about--you guessed it--robots who eventually revolt against their human creators. The term is derived from a Slavic term for "servitude," "forced labor" or "drudgery." The issue I have with the use of the term is that it implies a device that operates autonomously. Some UAS's may be preprogrammed to conduct their flight--and therefore operate by definition as robots--but many, if not most, will have someone remotely operating the flight vehicle. Even if preprogrammed, it is likely that someone will be monitoring the flight and can override the program if necessary. The flip side occurs if something happens and the flight vehicle loses contact with its controller. One option, other than simply crashing, is for the UAS to have an autonomous program that would activate and immediately return it to its start point.

What is that thing flying around in the air without a person in it? Is it a drone, a robot or an unmanned aircraft system--aka UAS? What you call it may depend on your opinion of it. To paraphrase Shakespeare: that which we call a drone by any other name would fly the same. Those of us in this business will call it a UAS and will try to convince you to do the same.

© 2013 Geographic Frontiers

The Third Revolution in Remote Sensing

Drones—or, using the more dignified term, Unmanned Aircraft Systems (UAS's)—are the third revolution in remote sensing. With a low cost to purchase, a low cost to operate and a great deal of flexibility, drones will revolutionize the collection of remote sensing data from the air. With the Federal Aviation Administration mandated to develop a plan for integrating UAS's into the U.S. National Airspace System by 2015, the revolution is beginning to ramp up.

UAS advocates rightfully foresee a number of uses for drones that are not primarily tied to collecting data. They can be used for herding livestock or delivering needed medicine more quickly and efficiently to remote locations. (Cowboys and bush pilots, look out!) However, a main use of UAS's—and one that gets the attention of privacy advocates—will be to collect data, particularly pictures or video. These pictures or video—whether for search and rescue, law enforcement activity, industrial security or disaster response—fall under the esoteric category of remotely sensed data of the earth's surface.

To understand this context, let’s look at what I consider to be the first and second revolutions of remote sensing.

While James Wallace Black's 1860 photograph of Boston, taken from a hot air balloon at 1,200 feet, represents the birth of remote sensing, putting cameras on airplanes in World War I represents the first revolution in remote sensing. World War I led to major advances in aerial remote sensing, and more accurate cameras and navigation equipment soon led to the use of aircraft to collect imagery for aerial surveys and for making better maps for military and civil applications. One problem with aircraft is they could get shot down if they were flying over an unfriendly area. The 1960 shootdown of a U-2 spy plane by the Soviet Union is a significant example of this problem. Less well known is that a U.S. pilot was also shot down over Cuba in 1962 during the Cuban Missile Crisis. Gary Powers successfully parachuted from his aircraft into the hands of Soviet captors. Maj. Rudolf Anderson was not so fortunate and lost his life in the 1962 incident over Cuba. Another problem with manned aircraft is—independent of flying for military or civilian purposes—they can develop problems and crash, sometimes killing the pilot. On August 31, 2012, a plane carrying a Pueblo County (Colorado) Sheriff's Deputy and piloted by a retired Pueblo Police Department Captain crashed in the San Isabel National Forest, killing both of them. The two were searching for plots of illegal marijuana growth. Fast forward a few years, and a UAS could be used for the same mission. Even if it crashed, the pilot and sensor operator would still be going home to have dinner with their families.

A little over 40 years after the end of World War I came the second revolution in remote sensing—the use of satellites. In 1959, the Explorer 6 satellite took the first pictures of the earth from space, although the resolution was considerably worse than Black's 1860 photograph of Boston from a balloon. In 1960, the United States had its first successful (after a number of failures) photo reconnaissance satellite. Since the program was then highly classified, only a relative few experienced it. Some might place the start of the satellite imagery revolution at 1972 with the launch of the first Landsat system, which used a multi-spectral scanner to collect remote sensing imagery data. This was the first publicly available satellite remote sensing data that could be used to support meaningful analysis related to physical geography, environmental issues and natural resources. (Seeing this "new" Landsat imagery in my college Introduction to Remote Sensing class in the late 1970's was one of the things that led me into a 35-year career in this business.) As the years have gone by, satellite imagery has become commercialized, although most providers are still highly subsidized by their host country. With resolutions now better than a half meter, and readily accessible for viewing on Google Maps and Google Earth, imagery that I considered incredible and unique at the beginning of my career is now—well—common. Satellite imagery offers a number of advantages over aircraft imagery. Satellites can collect imagery over almost any part of the earth. There are no pilots to get shot down or to die in crashes. However, the best resolution from satellite imagery does not match the best resolution available from aircraft. There are sensors, LIDAR and full motion video as examples, that have yet to be integrated onto satellites. Also, satellites can't fly where you want them to. Neither can you have them fly over and stick around when you want them to, unlike what was portrayed in "Enemy of the State."

Now, after another 40 years or so, we are experiencing the third revolution in remote sensing, brought about by UAS's—or drones, if you like. As with the two previous revolutions, military and associated government expenditures started this one, and the technology eventually evolved into the commercial realm. UAS's, then called Remotely Piloted Vehicles or RPV's, made their debut with the U.S. military during the Vietnam War. The Israelis used drones with great success during the 1982 war in southern Lebanon. While the U.S. military made some use of UAS's during the 1991 Gulf War, UAS use really took off (pun intended) after 9/11. It didn't take long for enterprising personnel to realize that, in addition to cameras, a Hellfire missile could be launched from a UAS, turning a surveillance and reconnaissance asset into an Unmanned Combat Air Vehicle or UCAV.

Rightly or wrongly, UAS's carry the baggage of their military use with them as they migrate into the civilian sector. While UAS's don't provide the worldwide access of a satellite, they provide a number of advantages. They have significantly lower operating costs than manned aircraft, and they offer greater flexibility. You don't need an airfield to launch small UAS's, and they can be easily controlled by laptops or pad computers. Smaller sensors mean they can be just as effective in their data collection as their larger, manned cousins.

Whether we consciously recognize it or not, Google Earth and Google Maps have introduced the public to ubiquitous aerial imagery—predominantly from satellites. UAS's will make imagery and remote sensing data readily available to smaller companies, farmers and local jurisdictions, to name but a few. With the help of government subsidies, commercial remote sensing satellites will survive this third revolution in remote sensing. Not too many years from now, manned commercial aerial remote sensing platforms will become a thing of the past. If you have an aerial survey company, you need to be looking at UAS options, since they will be cheaper to operate than manned aircraft when it comes to collecting remote sensing data. A competitor with a UAS will beat your price every time.

There may be some who are leery of this revolution. All revolutions have their share of opponents. However, Peter W. Singer, author of "Wired for War," characterizes the reality accurately: "The debate over drones is like debating the merits of computers in 1979: They are here to stay, and the boom has barely begun. We are at the Wright Brothers Flier stage of this. There's no stopping this technology. Anybody who thinks they can put this genie back in the box—that's silliness."

The challenge for those of us in the UAS business is to listen to potential customers and to the public to ensure we all use this emerging capability responsibly.

© 2013 Geographic Frontiers