Events
Past Event
Probability Seminar | Eren Kizildag (UIUC)
Department of Mathematics: Probability Seminar
4:00 PM // 104, Lunt Hall
Details
Title: Statistical-Computational Tradeoffs in Random Optimization Problems
Abstract: Optimization problems with random objective functions are central in computer science, probability, and modern data science. Despite their ubiquity, finding efficient algorithms for solving these problems remains a major challenge. Interestingly, many random optimization problems share a common feature, dubbed a statistical-computational gap: while the optimal value can be pinpointed non-constructively (through, e.g., probabilistic/information-theoretic tools), all known polynomial-time algorithms find strictly sub-optimal solutions. That is, an optimal solution can only be found through brute-force search, which is computationally expensive.
In this talk, I will discuss an emerging theoretical framework for understanding the fundamental computational limits of random optimization problems, based on the Overlap Gap Property (OGP). This is an intricate geometric property of the solution space that yields sharp algorithmic lower bounds matching the performance of the best known polynomial-time algorithms for a wide range of random optimization problems. I will focus on two models to demonstrate the power of the OGP framework: (a) the symmetric binary perceptron, a random constraint satisfaction problem and a simple neural network that classifies/stores random patterns, widely studied in the computer science, probability, and statistics communities; and (b) the random number partitioning problem as well as its planted counterpart, a classical worst-case NP-hard problem whose average-case variant is closely related to the design of randomized controlled trials. In addition to yielding sharp algorithmic lower bounds, our techniques also give rise to new toolkits for the study of statistical-computational tradeoffs in other models, including the online setting.
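To make the flavor of this gap concrete, here is a small hedged sketch in Python (my own illustration, not material from the talk): for random number partitioning, exhaustive search finds the true optimal discrepancy on tiny instances, while the polynomial-time Karmarkar-Karp differencing heuristic, a standard baseline, typically returns a much larger value.

```python
# Hedged toy illustration of a statistical-computational gap in random
# number partitioning (not code from the talk).
import heapq
import itertools
import random

def brute_force_discrepancy(items):
    """Exact minimum |sum(A) - sum(B)| over all two-way partitions."""
    best = sum(items)
    for signs in itertools.product((1, -1), repeat=len(items)):
        best = min(best, abs(sum(s * x for s, x in zip(signs, items))))
    return best

def karmarkar_karp_discrepancy(items):
    """Largest-differencing heuristic: repeatedly replace the two largest
    numbers by their difference; the lone survivor is the discrepancy."""
    heap = [-x for x in items]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0.0

random.seed(0)
items = [random.random() for _ in range(16)]   # i.i.d. uniform weights
print("brute-force optimum:", brute_force_discrepancy(items))
print("Karmarkar-Karp     :", karmarkar_karp_discrepancy(items))
```

Roughly speaking, the optimum on i.i.d. uniform weights shrinks exponentially in the instance size, while differencing-type heuristics only reach quasi-polynomially small discrepancies; the code above shows that contrast in miniature.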
Time
Tuesday, November 5, 2024 at 4:00 PM - 5:00 PM
Location
104, Lunt Hall
Contact
Calendar
Department of Mathematics: Probability Seminar
Biological Systems that Learn
NSF-Simons National Institute for Theory and Mathematics in Biology
9:00 AM
Details
Many biological systems have evolved to embody solutions to complex inverse problems to produce desired outputs or functions; others have evolved strategies to learn solutions to complex inverse problems on much shorter (e.g., physiological or developmental) time scales. Another class of systems that solves complex inverse problems for desired outputs is artificial neural networks. A critical distinction between the biological systems and neural networks is that the former are not associated with processors that carry out algorithms such as gradient descent to solve the inverse problems. They must solve them using local rules that only approximate gradient descent. Thus, a key challenge is to understand how biological systems encode local rules for experience-dependent modification in physical hardware to implement robust solutions to complex inverse problems. A similar challenge is faced by a growing community of researchers interested in developing physical systems that can solve inverse problems on their own. Despite the differences between physical/biological systems that solve inverse problems via local rules, on the one hand, and neural networks that solve them using global algorithms like gradient descent, on the other, each has the potential to inform the other. For example, the insight that overparameterization is important for obtaining good solutions generalizes from neural networks to physical/biological learning systems.
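To give a hedged, concrete toy of the contrast drawn above (my own sketch, not a model from the workshop), the snippet below trains a small linear readout two ways: with the explicit gradient, a global computation, and with a weight-perturbation rule that uses only random local nudges plus a single broadcast scalar error, one simple example of a local rule that only approximates gradient descent.

```python
# Hedged toy: global gradient descent vs. a perturbation-based local rule
# (illustrative only, not a workshop model).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 5
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true

def mse(w):
    err = X @ w - y
    return float(np.mean(err ** 2))

# (a) Explicit gradient descent: needs the full gradient of the loss.
w_gd = np.zeros(n_features)
lr = 0.05
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w_gd - y) / n_samples
    w_gd -= lr * grad

# (b) Weight perturbation: each weight nudges itself at random and keeps the
# nudge in proportion to how much the scalar loss changed; no gradient is
# ever computed explicitly.
w_wp = np.zeros(n_features)
sigma, lr_wp = 0.01, 0.05
for _ in range(5000):
    delta = sigma * rng.normal(size=n_features)
    dL = mse(w_wp + delta) - mse(w_wp)
    w_wp -= lr_wp * (dL / sigma ** 2) * delta   # stochastic estimate of the gradient step

print("gradient descent loss   :", mse(w_gd))
print("weight perturbation loss:", mse(w_wp))
```

The perturbation rule learns, but more slowly and noisily than true gradient descent, which is the sense in which such local rules only approximate it.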
Examples of biological systems that learn at different scales include (1) biological filament networks such as the actin cortex, collagen extracellular matrix, and fibrin blood clots, which maintain rigidity homeostasis as an output under constantly varying and often extreme stresses as inputs; (2) epithelial tissues during various stages of development, which can undergo large shape changes and controlled cellular flows as desired outputs; (3) immune systems, which constantly adapt to bind to invading pathogens as desired outputs; and (4) ecological systems, in which species can change their interactions (e.g., learn to consume new species) in order to bolster their population as an output.
The goal of this workshop is to discover new core principles and mathematical tools/approaches shared across physical learning systems, biological learning systems, and neural networks that will inform deeper understanding and future discovery in all three fields. To this end, we will bring together researchers interested in viewing biological problems through the lens of inverse problems, researchers working on physical learning, and researchers studying neural networks for a week of intensive discussion and cross-fertilization.
We envision a unique format for this workshop, focused on framing and discussing open questions rather than on recitations of recent results. We will ask a subset of participants to present pedagogical overviews of key topics to help members of disparate communities establish common intellectual ground for discussion.
Time
Monday, January 6, 2025 at 9:00 AM - 5:00 PM
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Biological Systems that Learn
NSF-Simons National Institute for Theory and Mathematics in Biology
9:00 AM
Details
See the workshop description under the January 6 listing above.
Time
Tuesday, January 7, 2025 at 9:00 AM - 5:00 PM
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Workshop: Machine Learning and BIG DATA January 7-8, 2025
Northwestern IT Research Computing and Data Services
10:00 AM // 2124, Mudd Hall (formerly Seeley G. Mudd Library)
Details
Northwestern is partnering with the Pittsburgh Supercomputing Center to present a Machine Learning and Big Data workshop.
This workshop will focus on topics including big data analytics and machine learning with Spark, and deep learning using TensorFlow.
This will be an IN PERSON event hosted in Mudd Library Small Classroom 2124; there WILL NOT be a direct-to-desktop option for this event.
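For a hedged taste of the kind of material named above (illustrative only, not the workshop's actual curriculum), a minimal TensorFlow/Keras classifier on synthetic data looks like this:

```python
# Hedged sketch: a minimal TensorFlow/Keras classifier on synthetic data
# (not the workshop's actual exercises).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")     # synthetic binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {acc:.2f}")
```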
Time
Tuesday, January 7, 2025 at 10:00 AM - 4:00 PM
Location
2124, Mudd Hall (formerly Seeley G. Mudd Library)
Contact
Calendar
Northwestern IT Research Computing and Data Services
Biological Systems that Learn
NSF-Simons National Institute for Theory and Mathematics in Biology
9:00 AM
Details
See the workshop description under the January 6 listing above.
Time
Wednesday, January 8, 2025 at 9:00 AM - 5:00 PM
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Workshop: Machine Learning and BIG DATA January 7-8, 2025
Northwestern IT Research Computing and Data Services
10:00 AM // 2124, Mudd Hall (formerly Seeley G. Mudd Library)
Details
See the workshop description under the January 7 listing above.
Time
Wednesday, January 8, 2025 at 10:00 AM - 4:00 PM
Location
2124, Mudd Hall (formerly Seeley G. Mudd Library)
Contact
Calendar
Northwestern IT Research Computing and Data Services
Biological Systems that Learn
NSF-Simons National Institute for Theory and Mathematics in Biology
9:00 AM
Details
See the workshop description under the January 6 listing above.
Time
Thursday, January 9, 2025 at 9:00 AM - 5:00 PM
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Biological Systems that Learn
NSF-Simons National Institute for Theory and Mathematics in Biology
9:00 AM
Details
See the workshop description under the January 6 listing above.
Time
Friday, January 10, 2025 at 9:00 AM - 5:00 PM
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
IPR Colloquium: Barry O'Sullivan (University College Cork) - AI Policymaking: A Tale of Two Domains
Institute For Policy Research
12:00 PM // Chambers Hall
Details
"AI Policymaking: A Tale of Two Domains"
By Barry O'Sullivan, visiting professor in computer science, McCormick School of Engineering
O'Sullivan is a professor at the School of Computer Science & IT at University College Cork, a fellow and past president of the European AI Association, and an executive council member of the Association for the Advancement of Artificial Intelligence.
Abstract: Over the last several years, there has been considerable focus on the ethical concerns that arise from the deployment of artificial intelligence (AI) systems. A variety of policy approaches have emerged internationally in this respect. The European Union (EU) has taken the approach of introducing a specific regulation for AI, the EU AI Act. In this talk, Barry will present a perspective on the origins of the EU’s approach to AI ethics, its notion of Trustworthy AI, and the development of the EU AI Act. However, most international AI governance efforts have focused only on the civilian domain, with little consensus on AI applied in military, intelligence, and national security settings. Barry will present his experiences of developing a code of conduct for the use of AI in the military domain, and specifically the outcomes to date and current state of a longstanding Track II diplomacy dialogue between U.S., Chinese, and international experts. He will contrast the development of governance mechanisms for AI in these two very different domains and highlight some opportunities and challenges in moving things forward at the international level.
This event is part of the Fay Lomax Cook Winter 2025 Colloquium Series, where our researchers from around the University share their latest policy-relevant research.
Please note all colloquia this quarter will be held in-person only.
Time
Monday, January 13, 2025 at 12:00 PM - 1:00 PM
Location
Chambers Hall
Contact
Calendar
Institute For Policy Research
Research-In-Progress: Jordon Shivers
NSF-Simons National Institute for Theory and Mathematics in Biology
3:00 PM // Suite 4010
Details
Title: Inferring thermodynamic limits on non-equilibrium membrane morphologies
Members of the NITMB community are invited to join us for Research-In-Progress meetings, an informal venue for members of the NITMB to discuss ongoing and/or planned research.
Jordon Shivers is a Schmidt AI in Science Postdoctoral Fellow at the University of Chicago, where he is jointly mentored by Suri Vaikuntanathan and Aaron Dinner. Shivers is broadly interested in the physics and engineering of active soft and biological matter, and tackles problems in this area using a combination of computer simulations, theoretical soft matter physics, and machine learning techniques, often in collaboration with experimental groups.
Learn more about Jordon Shivers’ research and engage in discussion with the NITMB community. Research-In-Progress talks take place on Wednesdays at 3pm at the NITMB office (875 N Michigan Ave., Suite 4010). Snacks and coffee will follow.
Time
Wednesday, January 15, 2025 at 3:00 PM - 4:00 PM
Location
Suite 4010
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Buffett Symposium on AI & Geopolitics
Buffett Institute for Global Affairs
All Day // Buffett Reading Room, 2nd Floor, 720 University Place
Details
AI may soon become a key driver of geopolitical competition, with countries vying for technological supremacy and economic dominance. The potential of AI technologies to revolutionize industries, enhance military capabilities and shape societal norms has far-reaching implications for the disruption of traditional geopolitical balances, particularly as AI development outpaces the ability of policymakers to establish comprehensive governance frameworks.
What are the geopolitical risks and opportunities associated with AI development? What strategies are being developed to prevent the misuse of AI? How can states promote responsible and ethical AI development to shape the future of AI in a way that benefits humanity?
Join us for our winter quarter Buffett Symposium on AI and Geopolitics convening leading strategists, researchers and policymakers to discuss the transformative opportunities and profound challenges that AI poses in geopolitics. These leaders will offer insights on the increasing influence of AI technologies on global power dynamics, national security, economic development, international relations and more.
Co-sponsored by the Northwestern Security & AI Lab (NSAIL) and University College Cork.
Time
Thursday, January 16, 2025
Location
Buffett Reading Room, 2nd Floor, 720 University Place
Contact
Calendar
Buffett Institute for Global Affairs
NITMB Seminar Series - Sara Solla
NSF-Simons National Institute for Theory and Mathematics in Biology
10:00 AM // Suite 4010
Details
Sara Solla is a Professor of Physics and Physiology at Northwestern University. Solla’s research interests lie in the application of statistical mechanics to the analysis of complex systems. Her research has led her to the study of neural networks, which are theoretical models that incorporate "fuzzy logic" and are thought to be in some aspects analogous to the way the human brain stores and processes information. She has used spin-glass models (originally developed to explain magnetism in amorphous materials) to describe associative memory, worked on a statistical description of supervised learning, investigated the emergence of generalization abilities in adaptive systems, and studied the dynamics of incremental learning algorithms.
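For readers unfamiliar with spin-glass models of associative memory, here is a hedged textbook-style sketch (a generic Hopfield-type toy, not Solla's research code): binary spins coupled by a Hebbian rule store a few random patterns, and a corrupted pattern relaxes back to the stored memory.

```python
# Hedged textbook toy: associative memory in a Hopfield-type spin model
# (generic illustration, not Solla's work).
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5                                   # number of spins and stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def recall(state, sweeps=10):
    """Zero-temperature asynchronous dynamics: align each spin with its local field."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Corrupt a stored pattern by flipping 15% of its spins, then let the network relax.
target = patterns[0]
noisy = target.copy()
noisy[rng.choice(N, size=15, replace=False)] *= -1
recovered = recall(noisy)
print("overlap before:", float(noisy @ target) / N)
print("overlap after :", float(recovered @ target) / N)
```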
The NSF-Simons National Institute for Theory and Mathematics in Biology Seminar Series aims to bring together a mix of mathematicians and biologists to foster discussion and collaboration between the two fields. The seminar series will take place on Fridays from 10 am to 11 am at the NITMB offices in the John Hancock Center in downtown Chicago. There will be both an in-person and a virtual component.
Time
Friday, January 17, 2025 at 10:00 AM - 11:00 AM
Location
Suite 4010
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
Research-In-Progress: Efe Gökmen
NSF-Simons National Institute for Theory and Mathematics in Biology
3:00 PM // Suite 3500
Details
Members of the NITMB community are invited to join us for Research-In-Progress meetings, an informal venue for members of the NITMB to discuss ongoing and/or planned research.
Efe Gökmen is an NITMB Fellow. Gökmen’s expertise lies at the crossroads of machine learning, statistical physics, and information theory.
Learn more about Efe Gökmen’s research and engage in discussion with the NITMB community. Research-In-Progress talks take place on Wednesdays at 3pm at the NITMB office (875 N Michigan Ave., Suite 4010). Snacks and coffee will follow.
Time
Wednesday, January 22, 2025 at 3:00 PM - 4:00 PM
Location
Suite 3500
Contact
Calendar
NSF-Simons National Institute for Theory and Mathematics in Biology
MRSEC Seminar: Scientific discovery through physics-aware agentic AI that connects scales, disciplines, and modalities
NU Materials Research Science and Engineering Center
3:00 PM
Details
Abstract: For centuries, researchers have sought ways to connect disparate areas of knowledge. With the advent of Artificial Intelligence (AI), we can now rigorously explore relationships that span distinct areas – such as mechanics and biology, or science and art – to deepen our understanding, to accelerate innovation, and to drive scientific discovery. However, many existing AI methods have limitations when it comes to physical intuition, and they often hallucinate. To address these challenges, we present research that blurs the boundary between physics-based and data-driven modeling through a series of physics-inspired multimodal graph-based generative AI models, set forth in a hierarchical multi-agent mixture-of-experts framework. The design of these models follows a biologically inspired approach in which we re-use neural structures and dynamically arrange them in different patterns and for different purposes, a manifestation of the universality-diversity principle that is powerful in bioinspired materials. This new generation of models is applied to the analysis and design of materials, specifically to mimic and improve upon biological materials. Applied specifically to protein engineering, the talk will present case studies spanning distinct scales, from silk, to collagen, to biomineralized materials, as well as applications to medicine, food, and agriculture, where materials design is critical to achieving performance targets.
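To unpack one ingredient named in the abstract, the following is a hedged generic sketch of the mixture-of-experts idea (a textbook toy, not the speaker's models): a gating network softmax-weights the outputs of several expert modules for each input.

```python
# Hedged generic mixture-of-experts toy (not the models described in the talk).
import numpy as np

rng = np.random.default_rng(2)
d, n_experts = 4, 3
x = rng.normal(size=d)                                          # one input feature vector

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]   # toy linear experts
gate_W = rng.normal(size=(n_experts, d))                        # gating network weights

logits = gate_W @ x
weights = np.exp(logits - logits.max())
weights /= weights.sum()                                        # softmax gating weights

output = sum(w * (E @ x) for w, E in zip(weights, experts))
print("gate weights:", np.round(weights, 3))
print("mixed output:", np.round(output, 3))
```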
Bio: Markus J. Buehler is the McAfee Professor of Engineering at MIT. Professor Buehler pursues new modeling, design, and manufacturing approaches for advanced bio-inspired materials that offer greater resilience and a wide range of controllable properties from the nano- to the macroscale. He has received many distinguished awards, including the Feynman Prize, the ASME Drucker Medal, and the J.R. Rice Medal. Buehler is a member of the National Academy of Engineering.
Time
Wednesday, January 22, 2025 at 3:00 PM - 4:00 PM
Contact
Calendar
NU Materials Research Science and Engineering Center
Topics In Research Computing: National GPU and Computing Resources (Virtual)
Northwestern IT Research Computing and Data Services
12:00 PM
Details
Are you looking to teach a class where your students need access to GPUs? Do you need more GPUs than are available on Quest to train new AI models? Do you have specific hardware needs or are you hoping to deploy a science gateway? ACCESS-CI and NAIRR (National AI Research Resource) are two federally-funded computing resources available to support these and other computing needs. This workshop will introduce you to these platforms and cover how to set up accounts and request resources. This workshop will not be recorded.
Prerequisites: It may be helpful to have an account on the ACCESS-CI platform. You can sign up for an account on the ACCESS CI Registration Page if interested.
Time
Thursday, January 23, 2025 at 12:00 PM - 1:00 PM
Contact
Calendar
Northwestern IT Research Computing and Data Services
Statistics and Data Science Seminar: "The Role of AI in Scientific Discovery: Opportunities and Limitations"
Department of Statistics and Data Science
11:00 AM // Ruan Conference Room – lower level, Chambers Hall
Details
The Role of AI in Scientific Discovery: Opportunities and Limitations
Xiangliang Zhang, Leonard C. Bettex Collegiate Professor of Computer Science, University of Notre Dame
Abstract: Artificial Intelligence (AI) is reshaping the landscape of scientific discovery, enabling breakthroughs across diverse fields. However, when these AI tools are applied to scientific problems, gaps and mismatches often arise. The inherent uncertainty in scientific phenomena, coupled with issues like data quality, biases, and interpretability, poses significant challenges. This talk will discuss the transformative potential of AI in scientific discovery, focusing on its applications in predictive modeling, generative tasks, optimization strategies, and literature analysis. Examples will include AI models ranging from traditional neural networks to large language models (LLMs). At the same time, their limitations will be critically examined, calling for collaboration between the AI and scientific communities to address these challenges and unlock AI’s full potential in advancing scientific discovery.
Time
Friday, January 24, 2025 at 11:00 AM - 12:00 PM
Location
Ruan Conference Room – lower level, Chambers Hall
Contact
Calendar
Department of Statistics and Data Science