ABSTRACTS & SHORT BIOs (alphabetical order)


Stefania Bandini

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Full Professor of Computer Science at the University of Milano-Bicocca (ITALY). Fellow at The University of Tokyo, RCAST - Research Center for Advanced Science and Technology (JAPAN). Director of the CSAI - Complex Systems & Artificial Intelligence Research Center and Director of the PhD School in Computer Science at the Department of Computer Science, Systems and Communication (University of Milano-Bicocca). Research activity: Artificial Intelligence (knowledge representation, engineering and management) and complex discrete dynamical systems (Cellular Automata, Multi-agent Systems). Many of her research activities concern the study of the dynamical properties of Cellular Automata and related extensions (Multilayered Automata Networks and Cellular Agents), and their application to biology, chemistry, physics, urban planning, the design of distributed lighting systems in public spaces, and models of Collective Intelligence. Since 2004 she has focused on crowd and pedestrian modeling and simulation to support crowd management in public spaces, recently concentrating on the simulation of the dynamical formation of groups in crowded spaces and the related influence of multicultural issues. Since 2009 she has also extended her research to the field of Mobility and the Ageing Society. She co-chairs the Working Group “Ageing Society” of the Italian Association for Artificial Intelligence. http://www.csai.disco.unimib.it/CSAI/Bandini


Collective, big and resilient: from crowds to communities

ABSTRACT - Safety and security are key issues in designing the future of citizens everywhere in the world. Much progress has been made in computer-based simulation systems that support crowd managers and decision makers in increasing security and the perception of safety, but we are only at the beginning. The integration of this new generation of technologies and services with the new possibilities offered by big data and the Internet of Things pushes research towards novel conceptual scenarios, such as “community” and “resilience”. This lecture illustrates the concept of “resilience” in order to show the need to move beyond the notion of crowd towards a definition of community, so as to take into account the dynamical structures of collective intelligence.



Simone Bianco

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Simone Bianco obtained his PhD in Computer Science at DISCo (Dipartimento di Informatica, Sistemistica e Comunicazione) of the University of Milano-Bicocca, Italy, in 2010. He obtained his BSc and MSc degrees in Mathematics from the University of Milano-Bicocca, Italy, in 2003 and 2006, respectively. He is currently an assistant professor, and his research interests include computer vision, machine learning, optimization algorithms, and color imaging.


Recognition and description of visual data

ABSTRACT - The automatic recognition and description of visual data is a challenging problem in multimedia and computer vision, with a huge variety of applications. In this talk I will discuss the use of deep neural networks to extract rich scene annotations and descriptions from visual data. Several examples will be discussed, mainly related to image and video recognition in large multimedia archives and to the automatic computation of their attributes.



Carlo Biasior

Center for Research and Protection of Consumers, Trento - ITALY


SHORT BIO - I worked for ten years as a Consumer Advisor and Board Member in Altroconsumo, the largest Italian consumer association (Milano, Italy), and as a lawyer in a few law firms. I am now Director of the CRTCU, a Research Center for the Protection of Consumers (Trento, Italy). After obtaining a degree in law, I completed a Master's in Consumer Protection Law (LUISS, Roma, Italy) and a Ph.D. programme in Comparative Private Law. The Ph.D. programme improved my knowledge of European and national private law in a comparative approach, with main topics in market law, competition law, contract law and industrial law. My past experience working on several European projects (co-financed by the European Commission – DG Sanco) allowed me to develop key skills in project management and coordination with international partners.


Data from the market for the market: consumers, companies, and decision-makers. 

The case of the Veneto banks

ABSTRACT - This presentation analyzes the resilience of the market through the data of the two Veneto banks, Banca Popolare di Vicenza and Veneto Banca: the nature of the phenomenon, the data, and the reactions of the banks, of consumers and their families, and, above all, of the decision makers, both regulatory and political bodies. It aims to present the state of the art of banks and financial products on the market, and to introduce potential actions for guaranteeing stability to consumers, their families and the social system, all of whom are crucial and precious contributors to the market.



Federico Cabitza

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Federico Cabitza, MSc PhD, received his master's degree from the Politecnico di Milano in 2001. From 2001 to 2004 he worked as a software analyst in the private sector. In February 2007 he received his PhD in computer science with a thesis on computational models and architectures supporting collaboration and knowledge work in corporate domains. He is currently an assistant professor at the Università degli Studi di Milano-Bicocca, where he teaches computing formalisms, socio-technical analysis of collaborative systems, and knowledge management. His current research interests regard ICT-oriented domain analysis, user requirement elicitation and prioritization, and the design of computer-based support for cooperative work and knowledge management, with a focus on applications for the medical domain.


Building collective knowledge from the knowledge of the collective. 

Simple methods for complex decisions

ABSTRACT - Collective knowledge is an elusive concept, unless it is traced back to the knowledge of the single members of a collective. It is common to take an objectivistic stance towards knowledge. Alternatively, a situativistic stance can be adopted, which relates knowledge to knowledgeable decisions (e.g., answers) taken in specific scenarios and activities. Our research shows that, when problems are posed and related questions asked to the members of a collective, even if these members are novices of low expertise, their "average" answers are as knowledgeable as those given by high-expertise respondents (who are not included in the above collective). This is accomplished with a relatively low-budget technique, the structured questionnaire, and through methodologically sound analysis of the responses, including novel methods to process ordinal data. Implications in elusive and hard-to-define topic areas (such as conference ranking) and in critical, extremely knowledge-intensive domains (such as medicine) will be discussed and illustrated with real-life cases. This research also opens interesting scenarios in collaborative human-machine decision making, in which collective knowledge is augmented by AI-enabled decision support systems.
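A toy numeric illustration of the idea above: the aggregate of many noisy, low-expertise ordinal answers (say, 1-5 ratings) can land on the expert judgement. All numbers below are invented for illustration; the median is used because it is a sound aggregate for ordinal data, where the arithmetic mean is not.

```python
import statistics

# Invented example: an expert's rating vs. ten noisy novice ratings.
expert_rating = 4
novice_ratings = [3, 5, 4, 2, 4, 5, 4, 3, 4, 5]

# The median is well-defined for ordinal scales (unlike the mean).
collective_answer = statistics.median(novice_ratings)
print(collective_answer)  # 4.0, matching the expert
```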




Mirko Cesarini

University of Milan-Bicocca (Italy)

SHORT BIO - Mirko Cesarini is an Assistant Professor in Computer Science at the University of Milan-Bicocca, Department of Statistics and Quantitative Methods. He was awarded a Ph.D. in Computer Science from the Politecnico di Milano and has published several papers in international journals and conference proceedings. He serves on the program committees of several international journals and conferences. His research activities focus on Data Quality, Natural Language Processing, Text Classification, Machine Learning, and Information Systems. He has worked on several international research projects on managing information about the job market, e.g., the European project SEEMP - Single European Employment Market Place (IST-4-027347-STP) and the CEDEFOP-funded project “Real-time Labour Market Information on Skill Requirements: feasibility study and working prototype”.


Introduction to Machine Learning and Applications

ABSTRACT - In recent years, machine learning techniques have been receiving increasing attention from both the academic and the practitioner communities. Machine learning is currently used in many scenarios, e.g., autonomous car driving, text classification, and image detection, just to name a few. Most of the recent achievements rely on the ability of artificial intelligence algorithms to classify huge masses of data at a high level of abstraction, to autonomously learn classification models, and to identify unknown patterns in data. The talk will give the theoretical foundations and the knowledge required to address a problem using machine learning techniques. The key elements in addressing a classification problem will be outlined, namely the preparation of the training, validation, and test datasets; feature identification and selection; parameter tuning via grid search; metrics for performance evaluation; and hints for managing projects involving machine learning tasks. Some use cases will be presented, e.g., the methodologies and techniques used in a European funded project in the Labour Market Intelligence field. A short overview of recent research findings will be given.
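The workflow outlined above can be sketched in a few lines. This is a minimal illustration, not the talk's material: it assumes scikit-learn and uses its bundled Iris data, a decision tree, a held-out test split, and grid search for parameter tuning.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out a test set; GridSearchCV handles validation internally
# via cross-validation on the training portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Parameter tuning via grid search over a small hyperparameter grid.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 4]},
    cv=5)
search.fit(X_train, y_train)

# Performance evaluation on data never seen during training or tuning.
test_accuracy = search.score(X_test, y_test)
print(search.best_params_, round(test_accuracy, 2))
```

The three-way separation matters: the test score is computed on data the model never saw during either fitting or hyperparameter selection, which is what makes it a fair estimate of generalization.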



Florian Coulmas

IN-EAST Institute of East Asian Studies, University of Duisburg-Essen (Germany)


SHORT BIO - Florian Coulmas was Director of the German Institute for Japanese Studies (Tokyo) from 2004 to 2014, and is now Professor of Japanese Studies at the IN-EAST Institute of East Asian Studies, University of Duisburg-Essen (Germany). He has taught and researched in various environments, including the National Language Research Institute (Tokyo), Georgetown University, and Chuo University, Tokyo. His publications include more than a dozen monographs. He has been a regular contributor to a number of newspapers, including the Japan Times, Neue Zürcher Zeitung, and Süddeutsche Zeitung. Among his recent books are Population Decline and Aging in Japan: The Social Consequences (Routledge, 2007) and Sociolinguistics: The Study of Speakers' Choices (Cambridge University Press, 2006). His research interests include ethnochronology, that is, the cultural study of time, and the study of happiness.


How multilingual is the Internet?

ABSTRACT - In 2011 the UN declared access to the Internet a basic human right. Nothing was said, however, about the language(s) that can be used for this purpose. English, Italian, Japanese, Chinese – what about the rest? This paper is about multilingualism on the Internet. It explores the difficulties of determining the presence and relative weight (frequency of use) of different languages on the net, from a technical as well as a linguistic point of view. It discusses what multilingualism means for the users of the net, and what the global communications network means for the diversity of human languages.



Elisabetta Fersini

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Elisabetta Fersini, PhD in Computer Science, is currently assistant professor at the University of Milano-Bicocca (Department of Informatics, Systems and Communications). Her research mainly focuses on propositional and relational machine learning, with a particular interest in probabilistic discriminative and generative models. Her activity finds application in several natural language processing tasks, among which text classification and clustering, information extraction, and sentiment analysis for both well-formed texts and social media content.


Deep learning in natural language processing

ABSTRACT - Recently, deep learning approaches have obtained promising results across many different NLP applications. Deep learning aims in particular at handling huge amounts of text efficiently in an unsupervised setting, capturing the complexity of natural language in an intuitive way. We show how deep learning goes beyond the traditional “bag of words” representation by constructing a so-called "neural embedding", a vector space representation of each word or document. We illustrate how this representation can be exploited for different text analytics tasks.
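A toy sketch of why dense embeddings go beyond bag-of-words: one-hot vectors make every pair of distinct words equally dissimilar, while an embedding places related words close together in the vector space. The "embedding" vectors below are hand-made stand-ins, not vectors learned by an actual neural network.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot ("bag of words") representation: all distinct words orthogonal.
one_hot = {"king": np.array([1.0, 0.0, 0.0]),
           "queen": np.array([0.0, 1.0, 0.0]),
           "banana": np.array([0.0, 0.0, 1.0])}

# Dense "embedding" representation (hypothetical 3-d vectors).
embed = {"king": np.array([0.90, 0.80, 0.10]),
         "queen": np.array([0.85, 0.90, 0.05]),
         "banana": np.array([0.10, 0.00, 0.95])}

print(cosine(one_hot["king"], one_hot["queen"]))  # 0.0: no similarity signal
print(cosine(embed["king"], embed["queen"]))      # near 1: related words
print(cosine(embed["king"], embed["banana"]))     # much smaller
```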





Andrea Gorrini

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Andrea Gorrini is a postdoctoral research fellow at the Complex Systems & Artificial Intelligence Research Center (Department of Computer Science, Systems and Communication, University of Milano-Bicocca). His major research interest is the investigation of collective dynamics, focused on pedestrian crowd phenomena, proxemics and group behavior. He is particularly skilled in data collection (field observations, experiments in laboratory settings) and analysis (video tracking, statistics). He holds a Ph.D. in Sociology and M.Sc./B.Sc. degrees in Organizational Psychology. As a visiting Ph.D. student, he spent a research period as a JSPS Fellow at the Research Center for Advanced Science and Technology (The University of Tokyo) and at the Institute of Transport Studies (Faculty of Civil Engineering, Monash University, Australia). He is the author of several scientific papers published in international journals and conference proceedings.



Marco Guerzoni

Department of Economics and Statistics “Cognetti de Martiis”, University of Torino (Italy)


SHORT BIO - Marco Guerzoni is associate professor of applied economics at the University of Turin (Department of Economics and Statistics "Cognetti de Martiis"), where he teaches Data Journalism, Entrepreneurship, Economics of Innovation and Technology Policy, and Machine Learning for Marketing. His research area covers the management and economics of innovation, technology policy, and the economics of culture. He has recently been working on the methodological implications of big data and machine learning for business and the social sciences, with a focus on the issues of model selection, inference, and hypothesis mining. He has published in international journals such as Research Policy, Cambridge Journal of Economics, Italian Journal of Applied Statistics, and Journal of Evolutionary Economics. He is also a member of BRICS-Collegio Carlo Alberto and ICRIOS-Università Bocconi.


Unsupervised algorithms for text mining: the topic modeling

ABSTRACT - In this session, we present topic modelling as a method to extract information from a corpus of text. We show a method to analyze the evolution of the semantic content of a scientific field. The main idea is the translation of qualitative theorizing about the evolution of knowledge in the philosophy of science into a mathematical model, which can then be encoded into an algorithm. We test the algorithm both in a controlled experiment and on a corpus of several thousand scientific papers spanning more than 100 years in the discipline of economics. We strongly recommend that students bring a laptop to this presentation.


Supervised techniques for consumer choices

ABSTRACT - In this session, we discuss the challenges and opportunities posed by the recent availability of big data at the individual level for analyzing consumer choices. The most common approaches are based on empirical applications of the Random Utility Model. When large datasets are available, we suggest the use of machine learning algorithms as a complementary tool for predicting consumer choices. We show the results of an analysis of the cultural consumption habits of Torino Musei card holders, on a dataset of about 100 thousand users and more than a million museum visits in the Piedmont region. We strongly recommend that students bring a laptop to this presentation.



Hiroko Kudo

Chuo University, Tokyo (Japan)


SHORT BIO - Hiroko Kudo is full professor of public policy and public management at the Faculty of Law of Chuo University (Tokyo, Japan) and Visiting Professor and Research Fellow at the German Research Institute for Public Administration, German University of Administrative Sciences (Speyer). She won a scholarship from the Italian Government for the academic year 1992-1993 and spent a year at the Department of Regional Sciences, Milan Polytechnic. She earned her Ph.D. in public policy in 1998 (doctorate at the University of Venice). She taught at Aichi Shukutoku University (1995-1998) and Waseda University (1998-2005) as assistant professor and then associate professor, before assuming her current position in 2005. Since 2001, she has been visiting professor at the University of Rome "La Sapienza", Bocconi University, and the University of Cagliari, among others. She has been working with various master's and doctoral courses at Italian, German, Austrian, and Slovenian universities, and was visiting research fellow at the Economic and Social Research Institute of the Cabinet Office of the Japanese Government. She joined the German University of Administrative Sciences Speyer as guest professor in 2015. In 2015, she was also visiting research professor at Bocconi University (Italy). Her main research topics include: governance theory; performance measurement and policy evaluation; HRM; decentralisation and local governments; electoral systems; urban planning and policy; e-Government; ICT and innovation; culture and sport policy; and public administration reform.


Possibilities and issues of collective intelligence - Needs of Big Data governance

ABSTRACT - We have observed rising attention to, and expectations for, collective intelligence, that is, shared or group intelligence emerging from the collaboration, collective efforts, and competition of many actors (human and/or artificial). Despite the high expectations, though, collective intelligence has proved to face various issues and obstacles before it can be fully utilised. This research investigates the theoretical possibilities and the practical issues and difficulties of collective intelligence, using several theoretical frameworks from cognate sciences. Why do we often fail to use data generated by collective intelligence to design and/or implement public policies? When the issue is addressed from a practical viewpoint, we know that there are still difficulties and limitations in big data analysis. At the same time, there is strong attention and expectation, especially from lawmakers and government agencies, because of its potential to make public policy more scientific and robust.

The lecture investigates attempts to make public policy using collective intelligence and/or Big Data, including AI-generated policies, and adopts qualitative analysis, including the analysis of primary documents and semi-directive interviews. There is a limitation to single case studies; however, the case well represents the research question, providing a preliminary investigation and generating hypotheses for further studies. The collection of data is carried out through different sources, including semi-structured interviews with some of the actors, in order to allow better accuracy, to ensure the triangulation of the sources used, and to better understand the links between the object of investigation and its context. The lecture intends to stimulate discussion on collective intelligence and Big Data.



Daniele Magazzeni

King’s College London (UK)


SHORT BIO - Dr. Daniele Magazzeni is Lecturer in Artificial Intelligence at King's College London, where he leads the AI Planning Group. His research explores the links between Artificial Intelligence and Verification, and the use of AI in innovative applications. Magazzeni is an elected member of the ICAPS Executive Council. He serves on the organising and program committees of major AI conferences. He is Editor-in-Chief of AI Communications. He was Conference Chair of ICAPS 2016, is Workshop Chair of IJCAI 2017, and will chair the special track on AI and Robotics at ICAPS-18. He is a co-investigator on UK and EU projects. Daniele is a scientific advisor and has collaborations and consultancy projects with a number of companies and organizations.

The Future of AI and Big-Data for Business: eXplainable and Versatile

ABSTRACT - Investors are all looking at Big Data and Artificial Intelligence. There is a proliferation of AI startups whose business is built around new services based on Big Data and AI. Competition is fierce, with some areas already saturated. At the same time, the use of machine/deep learning techniques is raising significant issues of transparency and of understanding how decisions are made, as these techniques are currently treated almost as black boxes. For example, it is not possible to understand why AlphaGo selected a specific move at each turn, or on what basis a neural network recognises an image as "an image of a cat". The newborn discipline of eXplainable Artificial Intelligence (XAI) aims to tackle this challenge by providing explanations and justifications for Big Data and AI-based predictions. In the first part of this talk, I will introduce eXplainable Artificial Intelligence (XAI) and present recent advances and latest results. In the second part, I will discuss the future of Big Data and AI for business, and show how the versatility of data is the other key requirement.



Fabio Mercorio

University of Milan-Bicocca (Italy)


SHORT BIO - Fabio Mercorio is Assistant Professor of Computer Science at the University of Milan-Bicocca. He received a PhD in Computer Science in 2012 from the University of L’Aquila, Italy. His research interests include Artificial Intelligence Planning, Knowledge Discovery, and their mutual connections. He is co-designer and co-developer of UPMurphi, a planner for hybrid systems. He also collaborated in the realisation of DiNo, a heuristic planner based on UPMurphi developed by the AI Planning Group at King's College London. On the Knowledge Discovery side, his main contribution is the design and development of an automated algorithm for performing the Data Quality and Cleaning tasks of KDD through AI Planning.


An overview on NoSQL data stores

ABSTRACT - NoSQL data stores are growing in importance, mainly due to their ability to represent highly unstructured data in a schema-free fashion. This, in turn, has enabled tools and solutions that are employed in many business and research applications. The goal of this talk is to introduce the main characteristics of NoSQL paradigms and data stores (document, key-value, column-family and graph databases). We will discuss issues related to the use of these data stores in real applications, such as sharding, replication and consistency, which allow NoSQL databases to scale out and to represent data through aggregate models. Some examples, along with best practices to guide the use of these solutions, will also be provided.
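The "aggregate model" mentioned above can be illustrated in a few lines. Here each record is a self-contained, schema-free document queried with plain Python; the document structure and the `find` helper are invented for illustration, whereas a real document store such as MongoDB would index and query the same aggregates natively.

```python
import json

# Each order is one aggregate: the order and its line items travel together.
orders = [
    {"_id": 1, "customer": "Ada",
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"_id": 2, "customer": "Alan",
     "items": [{"sku": "A1", "qty": 5}]},
]

def find(collection, predicate):
    """A toy 'find' operation over a collection of documents."""
    return [doc for doc in collection if predicate(doc)]

# All orders containing sku "A1" — the whole aggregate is retrieved
# in one piece, with no joins across tables.
hits = find(orders, lambda d: any(i["sku"] == "A1" for i in d["items"]))
print(json.dumps([d["_id"] for d in hits]))  # [1, 2]
```

Because each aggregate is independent, a store can shard aggregates across nodes and replicate them without cross-record coordination, which is the root of NoSQL's ability to scale out.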


Graph-databases at a glance (Lab)

ABSTRACT - The purpose of this talk is to introduce graph databases. In particular, the talk will give the basics needed to set up a graph database from scratch, to fill it with data, and to perform some of the challenging queries that make it useful for handling data with a complex relationship structure (e.g., graph traversal, graph matching, some SNA metrics). We strongly recommend that students bring a laptop to this talk.
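A toy sketch of the kind of graph-traversal query at which graph databases excel: "who is reachable from a given node, and at how many hops?". The social graph below is invented; in an actual graph database the same question would be a short declarative query rather than hand-written traversal code.

```python
from collections import deque

# A tiny invented "follows" graph, stored as adjacency lists.
edges = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave"],
    "dave": ["erin"],
    "erin": [],
}

def distances(start):
    """Breadth-first traversal returning hop counts from `start`."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neigh in edges.get(node, []):
            if neigh not in dist:
                dist[neigh] = dist[node] + 1
                queue.append(neigh)
    return dist

print(distances("alice"))
# {'alice': 0, 'bob': 1, 'carol': 1, 'dave': 2, 'erin': 3}
```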



Katsuhiro Nishinari

Research Center for Advanced Science and Technology – The University of Tokyo (Japan)


SHORT BIO - Katsuhiro Nishinari received his Ph.D. in aerospace engineering from the University of Tokyo, Japan, in 1995, and became a professor at the University of Tokyo in 2009. He was a visiting scientist at the Institute for Physics, University of Cologne, Germany, from 2002 to 2003. He is a member of the Physical Society of Japan, the Applied Mathematical Society of Japan and the Japan Society of Mechanical Engineers, and an editor of the Journal of Cellular Automata. He organizes a number of international conferences, including ACRI, C&CA and TGF. He works on interdisciplinary topics in applied mathematics, fluid dynamics, and statistical physics. His research interest is "Jamology", a field he created 10 years ago: an interdisciplinary study of transportation and jamming phenomena (vehicular traffic, pedestrian motion, queue networks and supply chains, etc.) using applied mathematics. He has published more than eighty research papers in leading international journals and has written several books on traffic jams and applied mathematics. His latest publication list is available at http://park.itc.u-tokyo.ac.jp/tknishi/info_e.html. His research on traffic jams was selected as a major project of PRESTO, Japan Science and Technology Agency, in 2007. He has won awards for his work, including the Young Investigators Award of the Japan Society of Mechanical Engineers (1998) and the 23rd Scientific Publication Award in Japan for the book "Jamology" (in Japanese) (2007).



Massimiliano Nuccio

DESPINA - Big Data Lab, University of Torino (Italy)


SHORT BIO - Massimiliano Nuccio is Research Director at the Despina Big Data Lab, Department of Economics and Statistics, University of Turin, and Research Associate at Brick, Collegio Carlo Alberto. His research interests extend to the application of analytics to consumer behaviour, information economics and regional studies. He is a contract professor at several universities. Previously, he was a Marie Curie Fellow at the University of Turin, Visiting Professor in Culture and Regional Development at the Leuphana Universität Lüneburg, and Research Fellow at the ASK Research Centre, Bocconi University, Milan. He has published several articles and book chapters, including papers in international journals.


From the economics of data to data-driven economics

ABSTRACT - The increased availability of (big) data is posing new and somewhat unexpected challenges to the social sciences in general, and to economics in particular. The aim of this session is to survey and critically evaluate the contribution that data analytics can bring to the understanding of economic theory and to the analysis of economic problems. First, we place the rise of the data-driven economy within the theoretical framework of information economics. Second, we address the problems of big data and the IoT from the perspective of industrial economics. Finally, we compare the analytical approach of data analytics with econometrics, to understand differences and common ground for applied economics.


Supervised and Unsupervised Tools in Data Science for Economics

ABSTRACT - In this session we review different families of supervised and unsupervised algorithms, according to the different types of research problems that arise in economics. In particular, we explore the characteristics of some classification algorithms aimed at predicting a target, and of some unsupervised algorithms designed to recognize patterns, cluster data, and reduce data dimensionality. The lesson includes hints on how to select variables, how to evaluate the performance of a model to obtain reliable estimates, and how to compare models in order to choose the most appropriate and best-performing one.
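As a minimal sketch of the unsupervised family mentioned above, here is k-means clustering written directly in NumPy on synthetic data. The data, the fixed number of iterations, and the deterministic initialization (one seed point per group) are all simplifications for illustration; a real project would use a library implementation with proper initialization and convergence checks.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic groups of 2-d points (20 each).
data = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                  rng.normal(5.0, 0.3, (20, 2))])

def kmeans(X, init_centers, iters=20):
    centers = init_centers.copy()
    for _ in range(iters):
        # Assign each point to its nearest center...
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Deterministic init for the sketch: one point from each group.
labels, centers = kmeans(data, init_centers=data[[0, 20]])
print(sorted(np.bincount(labels).tolist()))  # [20, 20]: both groups recovered
```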


 


Matteo Palmonari

Department of Computer Science, Systems and Communication - University of Milano-Bicocca (Italy)


SHORT BIO - Matteo Palmonari is Assistant Professor at the Department of Informatics, Systems and Communication of the University of Milan-Bicocca. His research interests include semantic matchmaking, information quality, and knowledge exploration for the semantic web; several results of his research have been applied to service modeling, service matchmaking and e-Government applications. He has been a Visiting Postdoc at the ADVIS Laboratory, University of Illinois at Chicago. He is coordinator of the H2020 innovation action EW-Shopp: Supporting Event and Weather-based Data Analytics and Marketing along the Shopper Journey. He has published more than 70 papers in peer-reviewed international journals and conference proceedings.

 

Data semantics for data analytics

ABSTRACT - Data analytics is becoming pervasive in almost every sector and application domain. However, it has been estimated that most of the effort in data analytics projects is spent on data preparation and integration tasks, before the data are actually analyzed. When the data to be analyzed result from the integration of heterogeneous sources (e.g., tables, texts, databases), and/or these sources come from heterogeneous domains, data management solutions that leverage explicit representations of data semantics are often introduced to ease the integration tasks. In this talk the basic principles of semantic data infrastructures will be discussed, encompassing abstractions used in research as well as in industry, such as Knowledge Graphs, Data Linking, and Data Profiling.
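The knowledge-graph abstraction mentioned above can be sketched as facts stored as (subject, predicate, object) triples, linkable across sources and queryable by pattern. All entities and predicates below are invented for illustration; real infrastructures use RDF stores and query languages such as SPARQL.

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
triples = [
    ("productX", "hasCategory", "umbrella"),
    ("umbrella", "soldMoreDuring", "rain"),
    ("2024-03-01", "weatherWas", "rain"),
]

def query(pattern):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which categories sell more during rain? A pattern query over the graph.
print(query((None, "soldMoreDuring", "rain")))
# [('umbrella', 'soldMoreDuring', 'rain')]
```

Because sales data, category data, and weather data all reduce to the same triple shape, joining heterogeneous sources becomes pattern matching over one graph rather than ad-hoc schema mapping.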



Antonio Picariello

University of Naples “Federico II” & CINI ITEM National Lab (Italy)


SHORT BIO - Antonio Picariello is Associate Professor of Computer Engineering at the University of Naples Federico II, where he currently teaches Database Systems and Big Data Analytics to undergraduate and graduate students. He is the Director of the Consorzio Nazionale Italiano per l’Informatica (CINI) National Laboratory for Computer Science, Networks and Multimedia Systems (ITEM). He received the laurea degree in electronics engineering and the PhD degree in computer science and engineering from the University of Naples “Federico II”, Italy. He has been active in the fields of multimedia databases, information retrieval and big data analytics. His current research interests include knowledge extraction and management, multimedia integration, multimedia social networks and social media analytics. He is the author of more than 200 papers in the field.


Big Data Architecture Evolution

ABSTRACT - Big Data is still quite a young field, and therefore no standard architectures are available that have been in use for years. The talk will focus on the description of the main software architectures, from Hadoop to the Spark ecosystem, taking into account their analytics capabilities. Several examples from social network analytics will also be discussed.





Gianluigi Viscusi

École Polytechnique Fédérale de Lausanne (EPFL), College Of Management (CDM), Chair of Corporate Strategy and Innovation (CSI), Lausanne (Switzerland)


SHORT BIO - Research interests: policy and technology innovation, eGovernment, crowdsourcing, science communication, cognitive economy, methodologies for information systems planning, information value, business modelling, IS strategy, design and management of large set of conceptual schemas, Service Science, Social Study of Information Systems.


Crowdsourcing and Innovation: Key Issues and Research Paths

ABSTRACT - In this talk we challenge the usual understanding of crowdsourcing, in which the number of people defines the crowd and maximizing this number is often assumed to be the goal of any crowdsourcing exercise. However, there are structural characteristics of the crowd that might be more relevant than the number of participants. These characteristics include (1) the growth rate and its attractiveness to members, (2) the equality among members, (3) the density within provisional boundaries, (4) the goal orientation of the crowd, and (5) the “seriality” of the interactions between members of the crowd. We then discuss a typology that may allow managers to position their companies’ initiatives among four strategic types: crowd crystals, online communities, closed-crowd, and open-crowd driven innovation. We then analyze the effects on organizations, considering different types of innovation related to crowdsourcing and discussing internal crowdsourcing, the concept of crowd capital, and the role of information systems. Finally, we outline potential research paths and methods.