Keynote 1
Elisa Bertino
CS/ECE, Purdue University
Research Director of CERIAS, and Director of Cyber Center, Discovery Park
Data Provenance - Concepts and Research Challenges
Date: October 15, 2012, 8:45 AM – 10:00 AM Location: Symphony Ballroom
The notion of data provenance was formally introduced a decade ago and has since been investigated, but mainly from a functional perspective, following the historical pattern of introducing new technologies with the expectation that security and privacy can be added later. Despite very recent interest from the cyber security community in some specific aspects of data provenance, there is no long-haul, overarching, systematic framework for the security and privacy of provenance. The importance of secure provenance R&D has been emphasized in the recent report on federal game-changing R&D for cyber security, especially with respect to the theme of Tailored Trustworthy Spaces. Secure data provenance can significantly enhance data trustworthiness, which is crucial to various decision-making processes. Moreover, data provenance can facilitate accountability and compliance (including compliance with the privacy preferences and policies of relevant users), can be an important factor in access control and usage control decisions, and can be valuable in data forensics. Along with these potential benefits, data provenance also poses a number of security and privacy challenges. For example, sometimes provenance needs to be confidential so that it is visible only to properly authorized users, and we also need to protect the identity of entities in the provenance from exposure. We thus need to achieve high assurance of provenance without compromising the privacy of those in the chain that produced the data. Moreover, if we expect voluntary large-scale participation in provenance-aware applications, we must ensure that the privacy of the individuals or organizations involved will be maintained. It is incumbent on the cyber security community to develop a technical and scientific framework that addresses these security and privacy challenges so that our society can gain maximum benefit from this technology.
In this talk, we will discuss a framework of theoretical foundations, models, mechanisms, and architectures that allows applications to benefit from privacy-enhanced and secure use of provenance in a modular fashion. After introducing the main components of such a framework and the notion of the provenance life cycle, we will discuss in detail the research questions and issues concerning each component and related approaches, and report preliminary research results.
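The integrity side of the challenge described above can be illustrated with a toy hash-chained provenance log. This is a generic sketch in Python; the record fields, actor names, and chaining scheme are illustrative assumptions, not the framework presented in the talk:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a provenance record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, actor: str, action: str) -> None:
    """Append a record linked to the previous one by its digest."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"actor": actor, "action": action, "prev": prev}
    chain.append({**body, "digest": record_hash(body)})

def verify_chain(chain: list) -> bool:
    """Recompute every link; editing any earlier record breaks later links."""
    prev = "0" * 64
    for record in chain:
        body = {"actor": record["actor"], "action": record["action"], "prev": record["prev"]}
        if record["prev"] != prev or record["digest"] != record_hash(body):
            return False
        prev = record["digest"]
    return True

chain = []
append_record(chain, "alice", "created dataset")   # hypothetical actors
append_record(chain, "bob", "cleaned dataset")
assert verify_chain(chain)

chain[0]["action"] = "fabricated dataset"          # tamper with history
assert not verify_chain(chain)
```

Note that a chain like this addresses only integrity; the confidentiality and identity-protection concerns raised in the abstract would require, e.g., encryption and pseudonymous actor identifiers on top of it.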
Elisa Bertino is a professor of computer science at Purdue University, and serves as Director of the Purdue Cyber Center and Research Director of the Center for Education and Research in Information Assurance and Security (CERIAS). Prior to joining Purdue, she was a professor and department head at the Department of Computer Science and Communication of the University of Milan. She has been a visiting researcher at the IBM Research Laboratory (now Almaden) in San Jose, at the Microelectronics and Computer Technology Corporation, at Rutgers University, and at Telcordia Technologies. Her recent research focuses on database security, digital identity management, policy systems, and security for web services. She is a Fellow of the ACM and of the IEEE. She received the IEEE Computer Society 2002 Technical Achievement Award and the IEEE Computer Society 2005 Kanai Award. She is a member of the editorial boards of IEEE Transactions on Dependable and Secure Computing and IEEE Security & Privacy. She is currently serving as chair of the ACM Special Interest Group on Security, Audit and Control (ACM SIGSAC).
Keynote 2
Flavio Bonomi
Cisco Fellow, Vice President, Head of the Advanced Architecture and Research Organization at Cisco Systems
Collaboration and the Internet of Things
Date: October 16, 2012, 8:30 AM – 9:40 AM Location: Symphony Ballroom
The term "Internet of Things" describes the growing expansion of connectivity toward "things": sensors, machines, and appliances of all kinds. It will also stimulate an evolution of the connectivity of people to people, and of people to things, in new and creative ways. New communication paradigms, including delay-tolerant and ad hoc modes of communication, will become more widespread. Computing and storage will be forced to become more distributed and pervasive. As a consequence, new and expanded modes of collaboration will become possible. In this talk, we will explore some of the implications of this exciting technology transition.
Flavio Bonomi is a Cisco Fellow and Vice President, and Head of the Advanced Architecture and Research Organization at Cisco Systems in San Jose, California.
He is co-leading (with JP Vasseur) the vision and technology direction for Cisco’s Internet of Things initiative. This broad, Cisco-wide initiative encompasses major verticals, including Energy, Connected Vehicle and Transportation, and Connected Cities. In this role, with the support of his team, he is shaping a number of research and innovation efforts relating to mobility, security, communications acceleration, distributed computing, and data management.
Before joining Cisco in 1999, Flavio Bonomi was at AT&T Bell Labs between 1985 and 1995, with architecture and research responsibilities relating mostly to the evolution of ATM technology, and then was Principal Architect at two Silicon Valley startups, ZeitNet and Stratum One.
Flavio Bonomi received a PhD in Electrical Engineering in 1985 and a Master of Electrical Engineering in 1981 from Cornell University in Ithaca, New York. He received his Electrical Engineering degree from Pavia University in Italy.
Keynote 3
Dimitrios Georgakopoulos
Director of the Information Engineering Laboratory, CSIRO, Australia
Open Big Data Solutions for the Internet of Things
Date: October 16, 2012, 1:00 PM – 2:15 PM Location: Symphony Ballroom
The Internet of Things (IoT) is a dynamic global information network consisting of internet-connected objects, such as RFIDs, sensors, and actuators, as well as other instruments and smart appliances that are becoming an integral component of the future internet. Currently, such internet-connected objects or “things” outnumber both people and computers connected to the internet, and their population is expected to grow to 50 billion in the next 5 to 10 years. To enable the development of IoT applications, such objects must become dynamically integratable into emerging information networks supported by architecturally scalable and economically feasible internet service delivery models, such as cloud computing. Therefore, the proliferation of IoT applications has recently given rise to the notion of an IoT cloud. However, there is still no easy way to formulate and manage IoT cloud environments that dynamically discover and integrate internet-connected objects and deal with the big data they produce. Furthermore, corresponding elastic pay-as-you-go services currently do not exist.
In this talk, we provide an overview of joint research efforts by prominent open source contributors towards enabling a new range of intelligent IoT applications. In particular, we discuss four interrelated research projects that aim to develop an open source software platform that will help springboard IoT application development in academic research institutions and SMEs around the world. The talk focuses primarily on the development of IoT solutions that support dynamic semantic-based discovery and integration of internet-connected objects as needed by each application at hand, solutions for distributed stream processing and near-real-time exploitation of big IoT data, and corresponding IoT cloud services. We also discuss a recent effort to produce a unified open source middleware framework enabling the dynamic formulation of self-managed cloud environments for IoT applications, which will serve as a blueprint for non-trivial IoT applications delivered in an autonomic fashion and according to a cloud computing model. A major case study from the domain of digital agriculture is presented at the end of the talk. This case study illustrates how open source IoT cloud services help achieve higher agricultural productivity by analysing data collected from thousands of sensors and cameras to determine which plant(s) perform better on each farm and by predicting future crop production.
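As a rough illustration of the kind of near-real-time stream aggregation mentioned above, the following Python sketch averages sensor readings over fixed (tumbling) time windows. The sensor IDs, window size, and reading format are hypothetical, and a production IoT platform would distribute this work over many nodes rather than run it in one process:

```python
from collections import defaultdict

def tumbling_window_avg(readings, window_s=60):
    """Group (sensor_id, timestamp_s, value) readings into fixed-size
    time windows and average each sensor's values per window."""
    sums = defaultdict(lambda: [0.0, 0])          # key -> [total, count]
    for sensor_id, ts, value in readings:
        key = (sensor_id, ts // window_s)         # (sensor, window index)
        sums[key][0] += value
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

# Hypothetical farm telemetry: soil probe and camera trigger counts.
readings = [
    ("soil-7", 0, 21.0), ("soil-7", 30, 23.0),    # window 0
    ("soil-7", 65, 25.0),                         # window 1
    ("cam-2", 10, 1.0),
]
averages = tumbling_window_avg(readings)
assert averages[("soil-7", 0)] == 22.0
assert averages[("soil-7", 1)] == 25.0
```

Tumbling windows are only one windowing choice; sliding or session windows trade latency against smoothing in different ways.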
Dimitrios Georgakopoulos is a Research Director at the CSIRO ICT Centre, where he heads the Information Engineering Laboratory, which is based in Canberra and Sydney. The laboratory has 70 researchers and more than 40 visiting scientists, students, and interns specializing in the areas of Service/Cloud Computing, Human Computer Interaction, Machine Learning, and Semantic Data Management. Dimitrios is also an Adjunct Professor at the Australian National University. Before coming to CSIRO in October 2008, Dimitrios held research and management positions in several industrial laboratories in the US. From 2000 to 2008, he was a Senior Scientist with Telcordia, where he helped found Telcordia’s Research Centers in Austin, Texas, and Poznan, Poland. From 1997 to 2000, Dimitrios was a Technical Manager in the Information Technology organization of the Microelectronics and Computer Technology Corporation (MCC), and the Chief Architect of MCC’s Collaboration Management Infrastructure (CMI) consortial project. From 1990 to 1997, Dimitrios was a Principal Scientist at GTE (currently Verizon) Laboratories Inc. Dimitrios has received a GTE (Verizon) Excellence Award and two IEEE Computer Society Outstanding Paper Awards, and was nominated for the Computerworld Smithsonian Award in Science. He has published more than one hundred journal and conference papers. Dimitrios is the Vice-Chair of the 12th International Semantic Web Conference (ISWC 2013), Sydney, Australia, 2013. In 2011, Dimitrios was the General Chair of the 12th International Conference on Web Information System Engineering (WISE), Sydney, Australia, and of the 7th International Conference on Collaborative Computing (CollaborateCom), Orlando, Florida, October 2011. In 2007, he was the Program Chair of the 8th WISE in Nancy, France, and of the 3rd CollaborateCom in New York, USA. In 2005, he was the General Chair of the 6th WISE in New York.
In 2002, he served as the General Chair of the 18th International Conference on Data Engineering (ICDE) in San Jose, California. In 2001, he was the Program Chair of the 17th ICDE in Heidelberg, Germany. Before that, he was the Program Chair of the 1st International Conference on Work Activity Coordination (WACC) in San Francisco, California, in 1999, and he has served as Program Chair of a dozen smaller conferences and workshops.
CBig 2012 Keynote
Christos Faloutsos
Carnegie Mellon University
Mining Billion-Node Graphs - Patterns and Scalable Algorithms
Date: October 14, 2012, 9:00 AM – 10:00 AM Location: Symphony Ballroom
What do graphs look like? How do they evolve over time? How do rumors and viruses propagate on real graphs? We review some static and temporal 'laws', fast algorithms to spot deviations and outliers, and recent developments in virus propagation and scalable tensor analysis.
Christos Faloutsos is a Professor at Carnegie Mellon University. He received the Research Contributions Award at ICDM 2006, the SIGKDD Innovations Award (2010), eighteen "best paper" awards (including two "test of time" awards), and four teaching awards. He is an ACM Fellow; he has published over 200 refereed articles, 11 book chapters, and one monograph. He holds six patents and has given over 30 tutorials and over 10 invited distinguished lectures. His research interests include data mining for graphs and streams, fractals, database performance, and indexing for multimedia and bioinformatics data.
CCSocialComp 2012 Keynote
Jürgen Pfeffer
Carnegie Mellon University
Challenges Arising from Analyzing Dynamic Social Computing Networks
Date: October 14, 2012, 10:45 AM – 12:00 PM Location: Symphony Ballroom
Users in collaborative communities, as well as in other social computing environments, create a lot of data when their activities are tracked in databases. A very common approach to analyzing these relational data is to use methods and tools from the field of Social Network Analysis. Network models of nodes (representing users) and links (representing collaboration or any other form of interaction between users) can be extracted from any social computing environment. Consequently, the following research questions are typically asked. Who is important in the network? Where are the groups of collaboration? Which users form bridges between groups of collaboration and can, therefore, amplify knowledge exchange? Answering these questions can help us to better understand the structure and dynamics of collaborative communities and to improve the application of collaborative tools for social computing. When taking a closer look at analyzing social computing networks, computational as well as theoretical challenges arise due to the nature of the data. For instance, networks extracted from social computing activities represent aggregations of many actions within a specific time period. This can lead to computational artifacts, because aggregating over different time periods can result in different network structures. Moreover, aggregated data are weighted, e.g., multiple interactions between users are coded as links with different weights, while many of the widely used metrics in network analysis do not take link weights into account. In addition, social computing networks are often very large, as thousands or millions of users interact worldwide. In contrast, many of the common metrics for analyzing social networks were developed for groups consisting of a few tens of people. This results not just in scaling issues, but also requires serious theoretical consideration of how measures should be interpreted.
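The weighted-versus-unweighted issue can be made concrete with a minimal Python sketch: the same three-user collaboration network looks uniform under plain degree centrality, but not once link weights (interaction counts) are taken into account. The user names and weights below are invented for illustration:

```python
def degree_centrality(weighted_edges, use_weights=False):
    """Degree centrality per node; optionally sum link weights
    instead of counting links, so repeated interactions matter."""
    scores = {}
    for u, v, w in weighted_edges:
        inc = w if use_weights else 1
        scores[u] = scores.get(u, 0) + inc
        scores[v] = scores.get(v, 0) + inc
    return scores

# Hypothetical network: (user, user, number of interactions between them).
edges = [("ann", "bob", 10), ("ann", "cat", 1), ("bob", "cat", 1)]

plain = degree_centrality(edges)
weighted = degree_centrality(edges, use_weights=True)

# Unweighted: every user touches two links, so all three look equally central.
assert plain["ann"] == plain["bob"] == plain["cat"] == 2
# Weighted: the heavy ann-bob tie separates them clearly from cat.
assert weighted["ann"] == 11 and weighted["bob"] == 11 and weighted["cat"] == 2
```

Degree is the simplest case; the same weighting question recurs, with more subtlety, for path-based measures such as closeness and betweenness centrality.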
In this talk I will discuss some challenges that arise from analyzing dynamic social computing networks and offer approaches to tackle these challenges.
Jürgen Pfeffer's research combines traditional network analysis and dynamic network analysis theories and methods with up-to-date science from the areas of visual analytics, geographic information systems, system dynamics, and data mining. His research focus lies in the computational analysis of organizations and societies with a special emphasis on large-scale systems. He is particularly interested in methodological and algorithmic questions as well as challenges arising from analyzing such systems. Pfeffer develops new algorithms to model and calibrate interpersonal communication networks as well as online social networks. This will help to better model and predict diffusion processes of information, opinions, and beliefs. He also works on the optimization of existing network measures since most popular centrality measures in social network analysis (closeness centrality, betweenness centrality) are not scalable for very large networks. On the other hand, his research projects also deal with the analysis of emerging conflicts, e.g. in Northern African countries or the Middle East. His main interest in these projects is to describe and to detect change within the networks consisting of persons, organizations, subjects, and companies of specific countries as well as the involvement of other countries and international organizations. Dr. Pfeffer is an Assistant Research Professor in the School of Computer Science at Carnegie Mellon University.
TrustCol 2012 Keynote
Adrian Perrig
Carnegie Mellon University
SafeSlinger: Easy-to-Use and Secure Public-Key Exchange
Date: October 14, 2012, 9:00 AM – 10:15 AM Location: Symphony Ballroom
Users regularly experience a crisis of confidence on the Internet. Is that email or instant message truly originating from the claimed individual? Such doubts are commonly resolved through a leap of faith, expressing the desperation of users.
To establish a secure basis for online communication, we propose SafeSlinger, a system leveraging the proliferation of smartphones to enable people to securely and privately exchange their public keys. Through the exchanged authentic public key, SafeSlinger establishes a secure channel offering secrecy and authenticity, which we use to support secure messaging and file exchange. Essentially, we support an abstraction to safely "sling" information from one device to another. SafeSlinger also provides an API for importing applications' public keys into a user's contact information. By slinging entire contact entries to others, we propose secure introductions, as the contact entry includes the SafeSlinger public keys as well as other public keys that were imported. We present the design and implementation of SafeSlinger, which has been implemented for Android and iOS and is available from their respective app stores.
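For readers unfamiliar with public-key exchange, the toy Python sketch below shows the general Diffie-Hellman pattern by which two parties derive a shared secret from exchanged public values, and then derive separate keys for secrecy and authenticity. It is a generic illustration with deliberately small, insecure parameters; it is not the SafeSlinger protocol, which additionally handles groups of users and authentication of the exchange itself:

```python
import hashlib
import secrets

# Toy modulus: the Mersenne prime 2**127 - 1. Real systems use vetted,
# much larger groups or elliptic curves, plus authentication of the exchange.
P = 2**127 - 1
G = 3

def keypair():
    """Private exponent plus the public value G**priv mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public value;
# both arrive at G**(alice_priv * bob_priv) mod P.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret

# Derive distinct keys for secrecy and authenticity from the shared secret.
raw = alice_secret.to_bytes((P.bit_length() + 7) // 8, "big")
enc_key = hashlib.sha256(b"enc" + raw).digest()
mac_key = hashlib.sha256(b"mac" + raw).digest()
```

Without authenticating the exchanged public values, this pattern is vulnerable to man-in-the-middle attacks, which is exactly the gap that physically co-located verification in systems like SafeSlinger is designed to close.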
Adrian Perrig is a Professor in Electrical and Computer Engineering, Engineering and Public Policy, and Computer Science at Carnegie Mellon University. Adrian serves as the technical director for Carnegie Mellon's Cybersecurity Laboratory (CyLab). He earned his Ph.D. degree in Computer Science from Carnegie Mellon University, and spent three years during his Ph.D. studies at the University of California at Berkeley. He received his B.Sc. degree in Computer Engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL). Adrian's research revolves around building secure systems and includes network security, trustworthy computing, and security for social networks. More specifically, he is interested in trust establishment, trustworthy code execution in the presence of malware, and the design of secure next-generation networks. More information about his research is available on Adrian's web page. He is a recipient of the NSF CAREER award in 2004, IBM faculty fellowships in 2004 and 2005, the Sloan research fellowship in 2006, the Security 7 award in the category of education from Information Security Magazine in 2009, and the Benjamin Richard Teare teaching award in 2011.