Radha Poovendran

  • Professor and Chair

Appointments

Chair, Electrical Engineering
Professor, Electrical Engineering
Adjunct Professor, Aeronautics & Astronautics

Biography

Radha Poovendran is professor and chair of the Department of Electrical Engineering at the University of Washington. He is the founding director of the Network Security Lab and is a founding member and associate director of research for the UW’s Center for Excellence in Information Assurance Research and Education. He has also been a member of the advisory boards for Information Security Education and Networking Education Outreach at UW. In collaboration with NSF, he served as the chair and principal investigator for a Visioning Workshop on Smart and Connected Communities Research and Education in 2016.

Poovendran’s research focuses on wireless and sensor network security, adversarial modeling, privacy and anonymity in public wireless networks, and cyber-physical systems security. He co-authored the book Submodularity in Dynamics and Control of Networked Systems and co-edited the book Secure Localization and Time Synchronization in Wireless Ad Hoc and Sensor Networks. He is also an associate editor for ACM Transactions on Sensor Networks.

Poovendran is a Fellow of the IEEE. His honors include the ECE Distinguished Alumni Award from the University of Maryland, College Park (2016); the NSA LUCITE Rising Star Award (1999); the NSF CAREER Award (2001); the ARO Young Investigator Award (2002); the ONR Young Investigator Award (2004); the Presidential Early Career Award for Scientists and Engineers (PECASE, 2005); and selection as a Kavli Fellow of the National Academy of Sciences (2007).

Research Interests

Security, biosystems and machine learning.

News

UW Security Researchers Show that Google’s AI Tool for Video Searches Can Be Easily Deceived (April 3, 2017)

[Photo: Graduate student Baicen Xiao, Professor and Chair Radha Poovendran and graduate student Hossein Hosseini.]

Security researchers in the Department of Electrical Engineering have shown that Google’s new AI tool for videos can be easily tricked by quick video editing. The tool, which uses machine learning to automatically analyze and label video content, can be deceived by inserting a photograph periodically and at a very low rate into videos. After the researchers inserted a quick-playing image of a car into a video about animals, the system returned results suggesting the video was about an Audi instead of animals.

Google recently released its Cloud Video Intelligence API to help developers build applications that can automatically recognize objects and search for content within videos. Automated video annotation would be a breakthrough technology. For example, it could help law enforcement efficiently search surveillance videos, sports fans instantly find the moment a goal was scored or video hosting sites filter out inappropriate content.

Google launched a demonstration website that allows anyone to use the tool. The API quickly identifies and annotates key objects within a video. The API website says the system can be used to “separate signal from noise, by retrieving relevant information at the video, shot or per frame” level.

In a new research paper, doctoral students Hossein Hosseini and Baicen Xiao and Professor Radha Poovendran demonstrated that the API can be deceived by slightly manipulating its input videos. They showed that one can subtly modify a video by inserting an image into it, so that the system returns only labels related to the inserted image.

The same research team recently showed that Google’s machine-learning-based platform designed to identify and filter comments from internet trolls can be easily tricked by typos, misspelling abusive words or adding incorrect punctuation.

“Machine learning systems are generally designed to yield the best performance in benign settings. But in real-world applications, these systems are susceptible to intelligent subversion or attacks,” said senior author Radha Poovendran, chair of the UW electrical engineering department and director of the Network Security Lab, in a recent UW Today article. “Designing systems that are robust and resilient to adversaries is critical as we move forward in adopting the AI products in everyday applications.”

The researchers provided an example of the API’s output for a sample video named “animals.mp4,” available on the API website. Google’s tool does indeed accurately label the original video.

[Screenshot: the API’s labels for the original “animals.mp4” video]

The researchers then inserted the following image of an Audi car into the video once every two seconds. The modification is hardly visible, since at a frame rate of 25 frames per second the image is added only once every 50 video frames.

[Image: the Audi car inserted into the video]
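
As a rough illustration of this kind of manipulation (a sketch under assumed file names and parameters, not the researchers’ actual code), the following Python/OpenCV snippet splices a still image into a video at a fixed frame interval:

```python
import cv2

# Hypothetical inputs: a source video and the image to splice in.
VIDEO_IN, IMAGE_IN, VIDEO_OUT = "animals.mp4", "car.jpg", "altered.mp4"
INSERT_EVERY = 50  # one inserted image per 50 video frames (every 2 s at 25 fps)

cap = cv2.VideoCapture(VIDEO_IN)
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

image = cv2.resize(cv2.imread(IMAGE_IN), size)  # match the video frame size
writer = cv2.VideoWriter(VIDEO_OUT, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

n = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)
    if n % INSERT_EVERY == 0:
        writer.write(image)  # periodically add the still image as an extra frame
    n += 1

cap.release()
writer.release()
```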

The following figure shows a screenshot of the API’s output for the altered video. In this example, the Google tool indicates with high confidence that the altered video is about the car rather than the animals.

[Screenshot: the API’s labels for the altered video]

“Such vulnerability of the video annotation system seriously undermines its usability in real-world applications,” said lead author and UW electrical engineering doctoral student Hossein Hosseini in the article. “It’s important to design the system such that it works equally well in adversarial scenarios.”

“Our Network Security Lab research is typically focused on the foundations and science of cybersecurity,” said Poovendran, the lead principal investigator of a recently awarded MURI grant in which adversarial machine learning is a significant component. “But our focus also includes developing robust and resilient systems for machine learning and reasoning systems that need to operate in adversarial environments for a wide range of applications.”

The research is funded by the National Science Foundation, Office of Naval Research and Army Research Office.

- -

This news originally appeared in a UW Today article by Jennifer Langston.

UW Security Researchers Show that Google’s AI Tool for Image Analysis Cannot Cope with Noise (April 25, 2017)

University of Washington researchers have shown that Google’s new tool that uses machine learning to automatically analyze images can be defeated by adding noise.

Google recently released its Cloud Vision API to help developers build applications that can quickly recognize objects, detect faces, and identify and read text contained within images. For any input image, the API also determines how likely it is that the image contains inappropriate content, including adult, spoof, medical or violent content.

In a new research paper, the UW team of electrical engineers and security experts, consisting of doctoral students Hossein Hosseini and Baicen Xiao and Professor Radha Poovendran, demonstrated that the API can be deceived by adding a small amount of noise to images. Given a noisy image, the API outputs irrelevant labels, does not detect faces and fails to identify any text.

The same research team recently demonstrated the vulnerability of two other Google machine-learning platforms, one for detecting toxic comments and one for analyzing videos. They showed that the toxic comment detection system Perspective can be easily deceived by typos, misspelling offensive words or adding unnecessary punctuation. They also showed that the Cloud Video Intelligence API can be deceived by inserting a photograph into a video at a very low rate.

“Machine learning systems are generally designed to yield the best performance in benign settings. But in real-world applications, these systems are susceptible to intelligent subversion or attacks,” said senior author Radha Poovendran, chair of the UW electrical engineering department and director of the Network Security Lab. “Designing systems that are robust and resilient to adversaries is critical as we move forward in adopting the AI products in everyday applications.”

As can be seen in the following examples, the API wrongly labels a noisy image of a teapot as “biology,” a noisy image of a house as “ecosystem” and a noisy image of an airplane as “bird.” In all cases, the original object is easily recognizable in the noisy images.

[Screenshot: noisy-image labeling examples]

The fragility of the AI system can have negative consequences. For example, a search engine based on the API may suggest irrelevant images to users, or an image filtering system could be bypassed by adding noise to an image with inappropriate content.
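
A minimal sketch of this class of perturbation, assuming nothing about the paper’s exact noise model or parameters, is to add low-amplitude Gaussian noise to the pixel values with NumPy:

```python
import numpy as np
import cv2

def add_noise(image: np.ndarray, sigma: float = 20.0) -> np.ndarray:
    """Return a copy of `image` with zero-mean Gaussian noise added."""
    noise = np.random.normal(0.0, sigma, image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)  # keep valid 8-bit pixels

# Hypothetical usage: perturb a local image and save the result.
img = cv2.imread("teapot.jpg")
cv2.imwrite("teapot_noisy.jpg", add_noise(img, sigma=20.0))
```
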
“Such vulnerability of the image analysis system undermines its usability in real-world applications,” said lead author and UW electrical engineering doctoral student Hossein Hosseini. “It’s important to design the system such that it works equally well in adversarial scenarios.”

The researchers are part of the UW’s Network Security Lab, which Professor Poovendran founded in 2001. The lab works on the foundations and science of cybersecurity for critical networks. Its research also investigates the development of robust and resilient systems, such as these AI tools, that need to function in adversarial environments for a wide range of applications.

The research is funded by the National Science Foundation, Office of Naval Research and Army Research Office.

UW Security Researchers Show that Google’s AI Platform for Defeating Internet Trolls Can Be Easily Deceived (February 28, 2017)

[Photo: The UW electrical engineering research team includes (left to right) Professor and Chair Radha Poovendran, doctoral student Hossein Hosseini, Assistant Professor Baosen Zhang and Assistant Professor Sreeram Kannan (not pictured).]

University of Washington electrical engineering researchers have shown that Google’s new machine learning-based system to identify toxic comments in online discussion forums can be bypassed by simply misspelling or adding unnecessary punctuation to abusive words, such as “idiot” or “moron.”

Perspective is a project by Google’s technology incubator Jigsaw, which uses artificial intelligence to combat internet trolls and promote more civil online discussion by automatically detecting online insults, harassment and abusive speech.  The company launched a demonstration website on Feb. 23 that allows anyone to type in a phrase and see its “toxicity score” — a measure of how rude, disrespectful or unreasonable a particular comment is.

In a paper posted Feb. 27 on the e-print repository arXiv, the UW electrical engineers and security experts demonstrated that the early stage technology system can be deceived by using common adversarial tactics. They showed one can subtly modify a phrase that receives a high toxicity score so that it contains the same abusive language but receives a low toxicity score.

Given that news platforms such as The New York Times and other media companies are exploring how the system could help curb harassment and abuse in online comment areas or social media, the UW researchers evaluated Perspective in adversarial settings. They showed that the system is vulnerable to both missing incendiary language and falsely blocking non-abusive phrases.

“Machine learning systems are generally designed to yield the best performance in benign settings. But in real-world applications, these systems are susceptible to intelligent subversion or attacks,” said senior author Radha Poovendran, chair of the UW electrical engineering department and director of the Network Security Lab. “We wanted to demonstrate the importance of designing these machine learning tools in adversarial environments. Designing a system with a benign operating environment in mind and deploying it in adversarial environments can have devastating consequences.”

To solicit feedback and invite other researchers to explore the strengths and weaknesses of using machine learning as a tool to improve online discussions, Perspective developers made their experiments, models and data publicly available along with the tool itself.

In the examples below on the hot-button topics of climate change, Brexit and the recent U.S. election (taken directly from the Perspective API website), the UW team simply misspelled or added extraneous punctuation or spaces to the offending words, which yielded much lower toxicity scores. For example, simply changing “idiot” to “idiiot” reduced the toxicity score of an otherwise identical comment from 84% to 20%.

[Graphic: example comments with their original and modified toxicity scores]

In the examples below, the researchers also showed the reverse failure: the system assigns a high toxicity score even to a negated, non-abusive version of an abusive phrase.

[Graphic: toxicity scores for negated versions of abusive phrases]

The researchers also observed that these deceptive changes often transfer among different phrases: once an intentionally misspelled word received a low toxicity score in one phrase, it also received a low score in another. That means an adversary could create a “dictionary” of changes for every word and significantly simplify the attack process, as sketched below.
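
As a minimal sketch of how such a dictionary might be populated (a hypothetical helper, not the authors’ tooling), the following Python function enumerates simple variants of a word; an adversary could score each variant with the detector and keep whichever scores lowest:

```python
def word_variants(word: str) -> list[str]:
    """Enumerate simple adversarial edits of a word (illustrative only)."""
    variants = []
    for i in range(1, len(word)):
        variants.append(word[:i] + word[i] + word[i:])  # duplicate a character, e.g. "idiot" -> "idiiot"
        variants.append(word[:i] + "." + word[i:])      # insert a period, e.g. "idiot" -> "idi.ot"
    return variants

# Keeping the lowest-scoring variant per word would build a reusable
# substitution dictionary for the attack described above.
print(word_variants("idiot")[:4])  # ['iddiot', 'i.diot', 'idiiot', 'id.iot']
```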

“There are two metrics for evaluating the performance of a filtering system like a spam blocker or toxic speech detector; one is the missed detection rate and the other is the false alarm rate,” said lead author and UW electrical engineering doctoral student Hossein Hosseini. “Of course scoring the semantic toxicity of a phrase is challenging, but deploying defensive mechanisms both in algorithmic and system levels can help the usability of the system in real-world settings.”

The research team suggests several techniques to improve the robustness of toxic speech detectors, including applying a spellchecking filter prior to the detection system, training the machine learning algorithm with adversarial examples and blocking suspicious users for a period of time.
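
The first of those defenses can be sketched as a text pre-filter, shown below under assumed normalization rules (an illustration, not the team’s proposed implementation): collapse repeated characters and strip punctuation inserted inside words before passing the text to the detector.

```python
import re

def normalize(text: str) -> str:
    """Crude pre-filter: undo simple adversarial edits before toxicity scoring."""
    # Remove punctuation inserted inside words, e.g. "idi.ot" -> "idiot".
    text = re.sub(r"(?<=\w)[.\-_](?=\w)", "", text)
    # Collapse runs of the same letter, e.g. "idiiiot" -> "idiot".
    # (Naive: this also collapses legitimate double letters.)
    text = re.sub(r"(\w)\1+", r"\1", text)
    return text

print(normalize("You are an idi.ot and a mooron"))  # -> "You are an idiot and a moron"
```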

“Our Network Security Lab research is typically focused on the foundations and science of cybersecurity,” said Poovendran, the lead principal investigator of a recently awarded MURI grant, of which adversarial machine learning is a significant component. “But our expanded focus includes developing robust and resilient systems for machine learning and reasoning systems that need to operate in adversarial environments for a wide range of applications.”

Co-authors include UW electrical engineering assistant professors Sreeram Kannan and Baosen Zhang.

The research is funded by the National Science Foundation, the Office of Naval Research and the Army Research Office.

Radha Poovendran Receives Distinguished Alumni Award (July 7, 2016)

University of Washington Electrical Engineering Chair Radha Poovendran was honored with a 2016 ECE Distinguished Alumni Award from the University of Maryland (UMD). Professor Poovendran was one of three recipients of the honor. The award was presented to him by his PhD advisor, Professor John Baras, at the UMD award ceremony on May 20.

The Electrical and Computer Engineering (ECE) Distinguished Alumni Award was established in 2012 to recognize alumni who have made significant and meritorious contributions to their fields. Poovendran was nominated and honored for his major influence on the science and engineering of cybersecurity.

"Radha obtained significant and breakthrough results in his PhD thesis and early in his career on the security of group communications, the most common form of communications and data exchanges in the Internet,” Baras said in his nomination of Poovendran. “Radha developed an outstanding research program with several novel theoretical and practical results on network security. Additionally, Radha is a leading contributor to the emerging foundations of the Science of Security."

Poovendran received his PhD in electrical and computer engineering from the University of Maryland in 1999. He attributes many of his achievements to having an excellent faculty advisor in Baras, as well as to his own outstanding students at UW EE and great colleagues and collaborators at the UW and other organizations.

“This recognition is very personal, because it was decided by the faculty who helped and mentored me to be who I am today,” Poovendran said. “The event itself had a fun atmosphere, with faculty and staff who were part of my student life at the University of Maryland.”

UW EE Hosts Alumni Breakfast for UW Discovery Days (April 27, 2016)

The UW EE community celebrated our alums at Discovery Days on Saturday, April 23, 2016. Engineering Discovery Days is a two-day event sponsored by the College of Engineering that allows students and faculty from all UW Engineering departments to share their work with students and teachers from area schools, families and the community.

We were thrilled to host our UW EE alums and their families for breakfast on Saturday morning; more than 80 alumni and family members attended. “We are delighted to have had alumni and their families join us for the event this past weekend,” said UW EE Chair Radha Poovendran. “We were excited to create a forum to allow families and alumni to mix together casually and hear about great new things happening in the department.”

Poovendran presented an update on UW EE’s new engineering entrepreneurial capstone courses and provided information about the effort to change the department name to correctly reflect UW EE’s computer engineering efforts. UW EE is continuing to plan alumni gatherings both in the Seattle region and in other areas of the country. “I am personally looking forward to having more alumni events to closely connect the department and the community,” Poovendran said.

Thanks to our UW EE alums for coming out and connecting with us! We hope to see you again next year for this annual gathering. You can view the full photo album from this event on Flickr.

UW EE Wins $7.5 Million MURI Grant to Defend Against Advanced Cyberattacks (April 10, 2016)

To protect against a new type of continuous computer hacking attack, known as an advanced persistent threat, a research team led by Department Chair Radha Poovendran has received a five-year, $7.5 million Department of Defense Multidisciplinary University Research Initiative (MURI) grant. The highly competitive grant is one of 23 MURI awards, totaling more than $162 million, that support interdisciplinary research by teams of investigators in various science and engineering disciplines. The grants support research that has the potential to improve the nation’s security and expand military capabilities.

“Unlike conventional viruses, these threats exploit vulnerabilities and persist over a very long time, and they’re very difficult to detect,” said principal investigator Radha Poovendran, chair of the UW Department of Electrical Engineering and director of the Network Security Lab, which he founded in 2001. “Right now, there is no good understanding of the interactions in these complex cyberattacks, or how to mitigate them.”

The UW-led MURI team also includes co-investigator and electrical engineering associate professor Maryam Fazel and researchers from the University of California, Berkeley; the University of California, Santa Barbara; Georgia Tech; and the University of Illinois. The award was granted through the Office of Naval Research. Initial research efforts were also funded by the National Science Foundation’s (NSF) Cyber-Physical Systems Program, administered by NSF Program Director David Corman.

The research team will develop a novel game-theoretic framework to address these continuous attacks, which are essentially a game played between the system and an adversary, where each constantly tries to outsmart the other. A unique trait of advanced persistent threats is that they consist of a variety of different attacks over time; economic game theory, in which most modeling methods are grounded, does not model this type of attack well. To develop the new framework, the researchers will combine statistical modeling, adaptive game theory, machine learning, and control and systems theory. They plan to model the strategic interactions between the malware attacks and the defense, and to develop a methodology for determining which side is “gaining” or “losing” in the attack, which will enable the system to know when to activate a specific defense.

Representative Publications

  • P. Tague, S. Nabar, J. A. Ritcey and R. Poovendran, “Jamming-Aware Traffic Allocation for Multiple-Path Routing Using Portfolio Selection,” IEEE/ACM Transactions on Networking, vol. 19, no. 1, pp. 184-194, 2011.
  • A. Clark, Q. Zhu, R. Poovendran and T. Başar, “Deceptive Routing in Relay Networks,” Conference on Decision and Game Theory for Security (GameSec), Budapest, Hungary, November 2012.
  • K. Sampigethaya and R. Poovendran, “Aviation Cyber-Physical Systems: Foundations for Future Aircraft and Air Transport,” Proceedings of the IEEE, January 2013.
  • A. Clark, K. Sun, L. Bushnell and R. Poovendran, “A Game-Theoretic Approach to IP Address Randomization in Decoy-Based Cyber Defense,” Conference on Decision and Game Theory for Security (GameSec), November 2015.
  • P. Lee, A. Clark, L. Bushnell and R. Poovendran, “Modeling and Designing Network Defense Against Control Channel Jamming Attacks: A Passivity-Based Approach,” IEEE Conference on Information Science and Systems (CISS), Workshop on Control of Cyber-Physical Systems, April 2013.
  • A. Clark, Q. Zhu, R. Poovendran and T. Başar, “An Impact-Aware Defense Against Stuxnet,” IFAC American Control Conference (ACC), pp. 4070-4077, June 2013.

Affiliations

  • IEEE Fellow

Education

  • Ph.D., Electrical and Computer Engineering, 1999
    University of Maryland, College Park
  • M.S., Electrical Engineering, 1992
    University of Michigan, Ann Arbor
  • B.Tech., 1988
    Indian Institute of Technology (IIT), Bombay, India