In information systems, AI-supported systems have taken on a transformative role, aiding in the automation of business processes, the optimization of decisions, and the development of innovative business models (Berente et al., 2021). AI is a powerful tool that is no longer confined to predefined scenarios but, through the development of generative AI, opens up creative and open-ended application possibilities in information systems (Feuerriegel et al., 2024). Beyond these technological advances, integrating AI into information systems not only poses technical challenges but also demands the design of systems that consider fairness and transparency alongside efficiency (Schoeffer et al., 2022; von Zahn et al., 2022). With the increasing societal significance of data-driven, AI-based systems, questions are emerging about new governance approaches and an adequate regulatory framework (Fast et al., 2023). From an information systems perspective, it is central to explore how AI can be designed and implemented in alignment with societal values and ethical principles to ensure its responsible use for the common good and broad societal acceptance.
A key aspect in the integration of AI-supported systems is human-computer interaction, which ensures user-friendliness and intuitive usability (Jain et al., 2021). It is crucial to develop interfaces that are not only technically advanced but also comprehensible and human-centered, enabling seamless and effective interaction between users and AI systems (Bauer et al., 2021; Jussupow et al., 2024). Particularly important is the investigation of humans' calibrated trust in AI systems (Schemmer et al., 2023). Moreover, data security and privacy in AI systems are of crucial importance (Dehling & Sunyaev, 2023). As AI is rapidly adopted across diverse application domains and takes on a pivotal role in society, it becomes essential to ensure that the associated information systems are not only high-performing but also secure and trustworthy.
As part of the track “Information Systems in the Age of AI”, we invite researchers to explore the multifaceted integration and evaluation of AI in information systems, to develop a deep and holistic understanding of the diverse challenges and potentials. We are open to various theoretical approaches, methods, or paradigms and encourage contributions from all areas that deal with the application of AI in various sectors such as healthcare, finance, marketing, or production. Our goal is to provide a platform for innovative, methodologically rigorous, and relevant research work that illuminates not only the technical aspects but also the human facets and societal issues of AI. We encourage contributions that transcend traditional information systems paradigms and integrate interdisciplinary perspectives to thoroughly explore and understand the dynamic and complex nature of AI in information systems.
Possible topics include, but are not limited to:
- Fair AI Policies: Investigations into ethical guidelines and fair use of AI.
- Appropriate Reliance on AI: Analyses of the emergence and design of calibrated reliance on AI systems.
- AI-based Information Systems: Design and application of AI in information systems.
- Data Access & Data-driven Business Models: Innovations and challenges in data-driven business models.
- AI & Security: Impacts of AI on information security.
- AI for Process Automation: Automation of business and production processes through AI.
- Human-AI Interaction: Exploration of the interaction between humans and AI, including human-in-the-loop approaches.
- AI & Trust: Building and maintaining trust in AI systems.
- Explainable AI & User Behavior: Research on transparency and comprehensibility, and on the interplay between explainable AI and user behavior.
We explicitly welcome interdisciplinary work and innovative approaches that go beyond traditional paradigms. The track aims to provide a comprehensive overview of the opportunities and challenges associated with integrating AI into information systems.
Track chairs
AEs
- Christoph Cede, Technical University Vienna
- Johannes Dahlke, University of Twente
- Maximilian Förster, University of Ulm
- Ulrich Gnewuch, University of Passau
- Gunther Gust, University of Würzburg
- Daniel Heinz, Karlsruhe Institute of Technology
- Marc-Fabian Körner, University of Bayreuth
- Mathias Klier, University of Ulm
- Jasmin Lampert, Austrian Institute of Technology
- Juho Lindman, University of Gothenburg
- Andreas Obermeier, University of Ulm
- Alexander Schiller, University of Regensburg
- Timo Sturm, Technical University of Darmstadt
- Moritz von Zahn, University of Frankfurt am Main
- Alona Zharova, Humboldt-University of Berlin
Literature
Bauer, K., Hinz, O., van der Aalst, W., Weinhardt, C. (2021). Expl(AI)n it to me – explainable AI and information systems research. Business & Information Systems Engineering, 63, 79–82.
Berente, N., Gu, B., Recker, J., Santhanam, R. (2021). Managing Artificial Intelligence. MIS Quarterly, 45(3), 1433-1450.
Dehling, T., Sunyaev, A. (2023). A Design Theory for Transparency of Information Privacy Practices. Information Systems Research, 35(3), 956-977.
Fast, V., Schnurr, D., Wohlfarth, M. (2023). Regulation of data-driven market power in the digital economy: Business value creation and competitive advantages from big data. Journal of Information Technology, 38(2), 202–229.
Feuerriegel, S., Hartmann, J., Janiesch, C., Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111-126.
Jain, H., Padmanabhan, B., Pavlou, P. A., Raghu, T. S. (2021). Editorial for the Special Section on Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society. Information Systems Research, 32(3), 675-687.
Jussupow, E., Benbasat, I., Heinzl, A. (2024). An integrative perspective on algorithm aversion and appreciation in decision-making. MIS Quarterly. Forthcoming.
Schemmer, M., Kuehl, N., Benz, C., Bartos, A., Satzger, G. (2023). Appropriate reliance on AI advice: Conceptualization and the effect of explanations. In Proceedings of the 28th ACM International Conference on Intelligent User Interfaces, 410–422.
Schoeffer, J., Kuehl, N., Machowski, Y. (2022). “There is not enough information”: On the effects of explanations on perceptions of informational fairness and trustworthiness in automated decision-making. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1616–1628.
von Zahn, M., Feuerriegel, S., Kuehl, N. (2022). The Cost of Fairness in AI: Evidence from E-Commerce. Business & Information Systems Engineering, 64, 335–348.