Call for Participation
March 21–23, 2022, Stanford University
Sponsored by the Association for the Advancement of Artificial Intelligence
- Important Deadlines:
- November 15, 2021: Submissions due to organizers (please see individual descriptions for extensions)
- December 10, 2021: Notifications of acceptance sent by organizers
- AAAI Spring Symposium Submission Site
COVID-19: The Spring Symposium is expected to be held at Stanford University. However, if the current pandemic makes this impossible, alternate plans for a virtual event will be announced in late 2021 or early 2022.
Most organizers have elected to use the AAAI Spring Symposium EasyChair site for receipt of submissions. If specified in the individual symposium description, please submit your work via the AAAI Spring Symposium EasyChair site. Please be sure to select the appropriate symposium when submitting your work. Interested individuals should submit a paper or abstract by the deadline listed above.
The topics of the nine symposia are:
- AI Engineering: Creating Scalable, Human-Centered and Robust AI Systems
- Artificial Intelligence for Synthetic Biology
- Can We Talk? How to Design Multi-Agent Systems in the Absence of Reliable Communications
- Closing the Assessment Loop: Communicating Proficiency and Intent in Human-Robot Teaming
- Designing Artificial Intelligence for Open Worlds
- Ethical Computing: Metrics for Measuring AI’s Proficiency and Competency for Ethical Reasoning
- How Fair is Fair? Achieving Wellbeing AI
- Machine Learning and Knowledge Engineering for Hybrid Intelligence
- Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams
Symposia generally range from 40 to 75 participants each. Participation will be open to active participants as well as other interested individuals on a first-come, first-served basis. Each participant will be expected to attend a single symposium. Registration information will be available on the AAAI website in early January 2022.
AI Engineering: Creating Scalable, Human-Centered and Robust AI Systems
While both industry and research communities focus substantial work on AI, developing new AI technology and implementing AI systems are two different challenges. Current AI solutions often undergo limited testing in controlled environments, and their performance is difficult to replicate, verify, and validate. To improve reliable deployment of AI and enable trust and confidence in AI systems, implementers need access to leading practices, processes, tools, and frameworks. The goal of this symposium is to establish and grow a community of researchers and practitioners focused on the discipline of AI Engineering, a field that combines the principles of systems engineering, software engineering, computer science, and human-centered design to create AI systems that serve human needs and stakeholder outcomes. By sharing lessons learned and practical experiences, we can expand the AI Engineering body of knowledge and progress from advancing individual tools to developing whole systems.
The symposium will focus on three AI engineering pillars, with the goals of further evolving the state of the art; gathering lessons learned, best practices, and workforce development needs; and fostering critical relationships.
- Human-centered AI: systems are designed to work with, and for, people.
- Scalable AI: the ability of algorithms, data, models, and infrastructure to operate at the size, speed, and complexity that the need demands.
- Robust and Secure AI: systems work as expected when deployed outside of controlled development, laboratory, and test environments.
Topics
We encourage participation on topics that explore pillars individually or at intersections. Examples of relevant submissions include (but are not limited to):
- Beyond Accuracy: Enhanced Model Evaluation Metrics
- Design for Human-Machine Teaming
- Evaluating MLOps Pipelines and Tools
- Budget Constraints in Adversarial Machine Learning
- Broad and Wide Scalability Patterns for AI Systems
- How to Tell if Your Dataset is Sufficient to Solve Your Problem
- Maintaining Value Alignment in AI Systems Operations
- Methods for Creating and Demonstrating Trust in AI Systems
Format
Our symposium will involve a mix of keynote and invited talks, breakout sessions, and panel discussions. We look forward to explorations of what AI engineering can and should entail.
Submissions
We invite contributions including:
- Technical papers documenting novel approaches, practices, and tools that facilitate the development, evaluation, and deployment of AI Systems.
- Case studies and practice papers that highlight failures of AI Systems due to lack of engineering principles or tooling, as well as lessons learned.
- Panels exploring challenges, solutions, and current debates around topics relevant to robust, secure, scalable, and human-centered AI Systems.
Please keep submissions to under eight (8) pages, including references and figures. There is no minimum submission length, and we encourage exploratory submissions.
Submit by December 15 via the AAAI EasyChair site.
Organizing Committee
Missy Cummings (Duke University), Rachel Dzombak (CMU SEI), Matthew Gaston (CMU SEI), Karen Myers (SRI International), William Streilein (MIT Lincoln Laboratory)
Contact: Rachel Dzombak (rdzombak@sei.cmu.edu)
For More Information
For more information, please see the supplementary symposium site at https://resources.sei.cmu.edu/news-events/events/aaai/call.cfm
Artificial Intelligence for Synthetic Biology
With the success of AI for Synthetic Biology at prior AAAI Symposium Series, we aim to build on that discourse and bring together researchers from the AI and synthetic biology communities to cultivate a multidisciplinary research community that can benefit both areas of expertise. For AI researchers, synthetic biology offers a novel domain with unique challenges; for the synthetic biology community, AI offers an opportunity to break through the complexity barrier it faces. Our primary goal remains the same: to connect and build mutually beneficial collaborations between the AI and synthetic biology communities.
Synthetic biology is the systematic design and engineering of biological systems. It holds the potential for revolutionary advances in medicine, environmental remediation, and many other areas. The design of synthetic organisms often occurs at a low level (e.g., the DNA level) in a manual process that becomes unmanageable as the size and complexity of a design grow. This is analogous to writing a computer program in assembly language, which also quickly becomes difficult as the program grows. Many of the emerging techniques and tools in synthetic biology produce large amounts of data; understanding and processing this data provides further avenues for AI techniques to make a big impact.
Topics:
- Developing AI techniques specifically geared towards SynBio problems
- Research that did or could have had an impact on COVID-19
- Machine-assisted gene circuit design
- Flexible protocol automation
- Assay interpretation and modeling
- Representation and exchange of designs
- Representation and exchange of protocols
- Data-driven modeling of biological systems
The symposium will include brief introductions to each domain to ensure it is accessible to attendees from both backgrounds. We plan to highlight research at the intersection of AI and synthetic biology that did, or could have had, an impact on COVID-19.
Format
We plan to continue the working group format that we used in the 2021 symposia, as well as focus groups looking at some of the open problems and challenges in the intersection of AI and Synthetic Biology. Ideally there will be contributed talks, and panel discussions (potentially including government agencies).
Submissions
Details pending. Please submit via the AAAI EasyChair submission site.
Organizing Committee
Aaron Adler (BBN Technologies, aaronadler@alum.mit.edu), Rajmonda Caceres (MIT Lincoln Laboratory, Rajmonda.Caceres@ll.mit.edu), Mohammed Ali Eslami (Netrias, LLC, meslami@netrias.com), Fusun Yaman (BBN Technologies, fusun.yaman@raytheon.com)
For More Information
For more information, please see the supplementary symposium site at https://www.ai4synbio.org/aaai-spring-2022/
Can We Talk? How to Design Multi-Agent Systems in the Absence of Reliable Communications
Existing technology for multi-agent autonomous systems is unable to solve an important class of real problems. At the root of this limitation is the assumption of pervasive, predictable, reliable, and free communications. These assumptions underpin the state of the art in many existing approaches to distributed planning and plan execution. Such approaches may not be viable if agents do not know whether communication is possible, if changes in the world require unplanned communications, if those communications take time, energy, or resources, if an agent's ability to communicate changes, and, in general, if communication comes at the cost of resources needed to achieve other goals.
This symposium aims to identify and bridge the gaps between theoretical frameworks for multi-agent autonomous systems and the challenges imposed by deployment in environments where the usual assumptions do not apply. Our goal is to help identify research avenues that can move the AI community beyond theoretical results for simple domains. We actively solicit research that pushes the boundaries of current multi-agent systems theory, as well as applications that require technology that may not exist today. We also invite challenge problems and modeling and simulation frameworks that enable the research community to test their ideas on such problems.
Topics
- Multi-Agent Collaboration
- Common Operational Picture
- System Engineering for Scalability
- Multi-Agent Operations
- Human-Swarm Interaction
- Lessons Learned and Path Forward
Format
The symposium will combine peer-reviewed paper presentations, invited talks, and panel discussions. We plan to have invited speakers describe a mix of ongoing applications of multi-agent systems that present the challenges described above, as well as state-of-the-art research on multi-agent systems that must operate in the presence of uncertainty and unreliable communications. We plan to leave the schedule flexible until papers, speakers, and panelists are confirmed.
Submissions
Papers will be reviewed by a program committee. Papers should be 6-8 pages in standard AAAI format; please see https://aaai.org/Press/Author/authorguide.php for templates. Submissions will be made via AAAI’s EasyChair site: https://easychair.org/conferences/?conf=sss22.
Organizing Committee
Jeremy Frank (NASA ARC, Jeremy.d.frank@nasa.gov), N. Cramer (NASA ARC, nicholas.b.cramer@nasa.gov), Amir Rahmani (NASA JPL, amir.rahmani@jpl.nasa.gov), Zac Manchester (CMU, zmanches@andrew.cmu.edu), Alex Shleyfman (Technion, shleyfman.alexander@gmail.com), Roman Bartak (Charles U., bartak@ktiml.mff.cuni.cz), Elaine Stewart (NASA GSFC, elaine.m.stewart@nasa.gov)
For More Information
For more information, please see the supplementary symposium site at https://sites.google.com/view/aaai-2022-spring-symposium/home
Closing the Assessment Loop: Communicating Proficiency and Intent in Human-Robot Teaming
Effective human-agent teaming will require artificial agents or robots to perform proficiency self-assessment (PSA). PSA can be operationally defined as evaluating how well a robot’s capabilities and performance align with a human stakeholder’s intent within a given environmental context. Thus, closing the loop between PSA and human-robot teaming requires the agent to conduct ongoing assessment, communicate its proficiency to one or more humans, and perceive the human’s intentions, values, and assessments. PSA will likely include traditional elements of explainability, but will also need to include incomplete or partial assertions of the degree of proficiency, what is impeding success, what is unknown, or what would need to change to enable success. Additionally, PSA must extend beyond post hoc evaluation to include a priori and in situ assertions about success likelihood.
This symposium emphasizes communication: robot-to-human communication of proficiency and human-to-robot communication of intent. For example, how should a robot convey predicted ability on a new task? How should it report performance on a task that was just completed? How should a robot adapt its proficiency criteria based on human intentions and values? How can a human effectively communicate intent? Communities in AI, robotics, human-robot interaction, and cognitive science have addressed related questions, but there are no agreed-upon standards for evaluating proficiency, communicating proficiency, representing and communicating intent, and adapting based on PSA. This is a pressing challenge, especially for long-duration and resilient interactions in challenging environments.
Topics
We invite submissions on a broad set of topics related to proficiency assessment and the communication of proficiency and intent. Specific topics include (but are not limited to):
- Proficiency self-assessment and related topics
- Communicating proficiency to humans
- Representing, communicating, and inferring human intent
- Relationships between proficiency self-assessment, explainability, fault diagnosis, and other related topics
Format
The format of the symposium will include invited talks, talks by authors of accepted papers, panels, and breakout discussions.
Submissions
Please submit one of the following types of submissions via the AAAI SSS-22 EasyChair site.
- Regular papers (6 pages + references)
- Position papers (2 pages + references)
- Summary of previously published papers (1-2 pages)
Organizing Committee
Holly Yanco (University of Massachusetts Lowell), Aaron Steinfeld (Carnegie Mellon University), Jacob W. Crandall (Brigham Young University), Michael A. Goodrich (Brigham Young University)
For More Information
For more information, please see the supplementary symposium site at https://successmuri.org/workshops/sss22/
Designing Artificial Intelligence for Open Worlds
Open-world learning has taken on new importance in recent years as AI systems continue to be applied and transitioned to real-world settings where unexpected events (‘novelties’) can, and do, occur. An open-learning framework has been defined as one that can ‘deal with both normal in-distribution inputs and undesired out-of-distribution (OOD) inputs.’ Designing AI that can operate in open worlds, including detecting, characterizing and adapting to novelty, is a critical goal on the path to building intelligent systems that can work alongside humans to solve complex problems while being reliable enough to handle the unexpected. We invite contributions that describe novel technical approaches for open-world learning and novelty, theoretical frameworks for understanding open-world learning and novelty (including theories of novelty), empirical studies, implementations (including simulators and experimental testbeds), and lessons learned from current implementations.
Topics
- Open-world learning
- Unexpected situations and novelty
- Open-world simulations and gameplaying
- Online reinforcement learning
- Out-of-distribution inputs
- Robustness and antifragility in AI
- Architectures for novelty detection, characterization and adaptation
- Experimental methods and frameworks for evaluating OOD
- Theories of open-world learning and novelty
- Applications
Format
Both days will comprise a mix of invited talks, paper presentations, a panel on open-world learning and novelty, and discussion groups (on the second day). Hybrid attendance will be supported.
Submissions
Please submit via the AAAI SSS-22 EasyChair site. We accept the following types of submissions in AAAI format:
- full papers (6-8 pages + references)
- short papers (2-4 pages + references)
Organizing Committee
Mayank Kejriwal, Co-Chair (University of Southern California), Eric Kildebeck, Co-Chair (University of Texas at Dallas), Abhinav Shrivastava, Co-Chair (University of Maryland), Bharat Bhargava (Purdue University), Carl Vondrick (Columbia University)
Contact: Mayank Kejriwal (kejriwal@isi.edu)
For More Information
For more information, please see the supplementary symposium site.
Ethical Computing: Metrics for Measuring AI’s Proficiency and Competency for Ethical Reasoning
The prolific deployment of Artificial Intelligence (AI) across different applications has introduced novel challenges for AI developers and researchers. AI is permeating decision making for the masses: from self-driving automobiles, to financial loan approval, to military applications. Ethical decisions have largely been made by humans with vested interest in, and close temporal and geographical proximity to, the decision points. With AI making decisions, those ethical responsibilities are now being pushed to AI designers, who may be far removed from how, where, and when the ethical dilemma occurs. Such systems may deploy global “ethical” rules with unanticipated or unintended local effects, or vice versa.
While explainability is desirable, it is likely not sufficient for creating “ethical AI,” i.e., machines that can make ethical decisions. These systems will require the invention of new techniques for evaluating the AI’s proficiency and competency in its own ethical reasoning. Using traditional software and system testing methods on ethical AI algorithms may not be feasible because what is considered “ethical” often consists of judgements made within situational contexts. The question of what is ethical has been studied for centuries. This symposium invites interdisciplinary methods for characterizing and measuring ethical decisions as applied to ethical AI.
Sample Topics:
- What are the dependencies and requirements for developing metrics of a system’s “ethical proficiency”?
- What is a sufficiently “ethical” system? What design and measurement considerations are necessary to engineer an “ethical standard” for AI systems?
- How can ethical principles be operationalized?
- How does ethical AI impact Modeling and Simulation of large-scale systems?
- What are methods for authoring rules of behavior for ethical AI? What measures need to be captured and what are the acceptable boundaries of those measures?
- How can we address performance concerns around deployment of ethical AI?
- Measuring for dangerous adaptations – How can we measure this effect in ethical AI? If human behavior is changing in a way that “we” don’t like, how do we realize it before it’s too late?
Submissions
Authors can submit papers of 2–6 pages, which will be reviewed by the organizing committee. We welcome papers on prior work and work in progress describing new methods that present a challenge or opportunity for developing metrics to evaluate ethical AI. Submissions should be formatted according to the AAAI template and submitted through the AAAI Spring Symposium EasyChair site.
Submissions are due November 15, 2021.
Organizing Committee
Peggy Wu (Peggy.Wu@rtx.com), Shannon Ellsworth (Shannon.Ellsworth@rtx.com), Michael Salpukas, PhD (Michael.R.Salpukas@rtx.com), Joseph Williams, PhD (Joseph.Williams@pnnl.gov), Hsin-Fu “Sinker” Wu (Hsin-fu.Wu@rtx.com), John Basl, PhD (J.Basl@northeastern.edu)
For More Information
For more information, please see the supplementary symposium site at https://sites.google.com/view/aaai-ethicalcomputingapproach/home
How Fair is Fair? Achieving Wellbeing AI
What are the ultimate goals and outcomes of AI? AI has incredible potential to make humans happy, but it also risks causing unintentional harm. This symposium aims to combine humanity perspectives with technical AI issues and to discover new success metrics for wellbeing AI, rather than for productive AI measured by exponential growth or economic/financial supremacy.
We call for AI challenges for new human-AI collaboration that discuss desirable human-AI partnerships for providing meaningful solutions to social problems from humanity perspectives. This challenge is inspired by the “AI for social good” movement, which pursues the positive social impacts of using AI in support of the Sustainable Development Goals (SDGs), a set of seventeen objectives for making the world more equitable, prosperous, and sustainable. In particular, we will focus on two perspectives: wellbeing and fairness.
The first perspective is “Wellbeing.” We define “Wellbeing AI” as artificial intelligence that aims to promote psychological wellbeing (that is, happiness) and maximize human potential. Our environment escalates stress, provides unlimited caffeine, distributes nutrition-free fast food, and encourages unhealthy sleep behavior. Wellbeing AI provides a way to understand how our digital experience affects our emotions and our quality of life, and how to design a better wellbeing system that puts humans at the center.
The second perspective is “Fairness.” AI has the potential to help humans make fair decisions, but to achieve fairness we need to tackle the “bias” problem in AI (and in humans). As big data becomes personal, AI technologies that manipulate the cognitive biases inherent in people’s minds have evolved, for example in social media such as Twitter and Facebook and in commercial recommendation systems. The “echo chamber effect” is known to make it easy for people with the same opinions to reinforce one another within a community. Recently, there has been a movement to exploit such cognitive biases in the political world as well. Advances in big data and machine learning should not overlook these new threats to enlightenment thought.
Topics
We welcome technical and philosophical discussions of achieving wellbeing and fairness in the design and implementation of ethics, machine learning software, robotics, and social media (but not limited to these). For example, interpretable forecasts, sound social media, helpful robotics, fighting loneliness with AI/VR, and promoting good health may be important topics for our discussions.
Format
The symposium will consist of invited talks, presentations, posters, and interactive demos.
Submissions
Authors should submit either full papers of up to 8 pages (minimum 6 pages) or extended abstracts of up to 2 pages. Extended abstracts should state the presentation type (short paper (1–2 pages), demonstration, or poster presentation). All submissions should be uploaded to AAAI’s EasyChair site at https://easychair.org/conferences/?conf=sss22; in addition, email your submission to aaai2022-hfif@cas.lab.uec.ac.jp by November 15, 2021.
Organizing Committee
Takashi Kido (Teikyo University, Japan, kido.takashi@gmail.com), Keiki Takadama (The University of Electro-Communications, Japan, keiki@inf.uec.ac.jp). For a full list of organizers and program committee members, please refer to the URL below.
For more Information
For more information, please see the supplementary symposium site.
Machine Learning and Knowledge Engineering for Hybrid Intelligence
The AAAI-MAKE 2022 symposium aims to bring together researchers and practitioners from machine learning and knowledge engineering to reflect how combining the two fields can contribute to hybrid intelligence systems.
In such hybrid architectures, agents that deploy different types of AI work together to solve problems where separate approaches do not provide satisfactory outcomes, for example with respect to explainability and data efficiency. Explainability is required for augmenting human intelligence in the loop of AI, and data efficiency (learning from small datasets) is required in many domains where data availability is limited. Hybrid approaches that combine machine learning with the use of logic can explain conclusions and increase data efficiency.
Topics
- Machine Learning, Deep Learning, and Neural Networks
- Knowledge Engineering, Representation, and Reasoning
- Hybrid Intelligence and Human-in-the-Loop AI
- Explainable AI
- Commonsense AI
- Hybrid AI
- Enterprise AI
- Conversational AI
- Neuro-symbolic AI
Format
The symposium involves presentations of accepted papers, side-tutorial events from industry, (panel) discussions, demonstrations, and plenary sessions.
Submissions
We solicit the following types of contributions. Use cases, application scenarios, and requirements from industry would be highly beneficial and most welcome.
- Position/full papers (10 to 16 pages) and short papers (5 to 9 pages) can include recent or ongoing research, business cases, application scenarios, and surveys.
- Industrial side-tutorial event or demonstration proposals (fewer than 5 pages) should focus on business or research related to the symposium topics; extensive product advertising is not desired.
- Discussion proposals (1 to 2 pages) should contain a description of the specific topic with a list of questions and a discussion moderator.
Since AAAI-MAKE is a dedicated symposium for combining machine learning and knowledge engineering, the contributions should address hybrid intelligence settings. All submissions must reflect the formatting instructions provided in the Author Kit and be submitted through EasyChair. Accepted papers shall be published on CEUR-WS, an established open-access proceedings site.
Organizing Committee
Hans-Georg Fill (University of Fribourg, Switzerland), Aurona Gerber (University of Pretoria, South Africa), Knut Hinkelmann (FHNW University of Applied Sciences and Arts Northwestern Switzerland), Doug Lenat (Cycorp Inc., Austin, TX, USA), Andreas Martin (FHNW University of Applied Sciences and Arts Northwestern Switzerland), Reinhard Stolle (Argo AI GmbH, München, Germany), Frank van Harmelen (VU University, Amsterdam, Netherlands)
Andreas Martin (primary contact for authors and AAAI) and Knut Hinkelmann will serve as co-chairs of the organizing and program committees.
For More Information
For more information, please see the supplementary symposium site.
Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams
There will always be interactions between machines and humans. When the machine has a high level of autonomy and the human-machine relationship is close, there will be underpinning, implicit assumptions about behavior and mutual trust. The performance of the human-machine team will be maximized when a partnership is formed that provides mutual benefits. Designing systems that include human-machine partnerships requires an understanding of the rationale of any such relationship, the balance of control, and the nature of autonomy. Essential first steps are to understand the nature of human-machine cooperation; to understand synergy, interdependence, and discord within such systems; and to understand the meaning and nature of “collective intelligence.” The reasons why it can be hard to combine machines and humans, attributable to their distinctively different characteristics and features, are also central to why they have the potential to work so well together, ideally overcoming each other’s weaknesses.
Across the widest range of applications, these topics remain a major concern of system design and development. Intimately related to these topics are the issues of human-machine trust and “assured” performance and operation of these complex systems, the focal topics of this year’s symposium. Recent discussions on trust emphasize that, in human-machine systems, trust is bidirectional and two-sided (as it is in humans): humans need to trust AI technology, but future AI technology may need to trust human inputs and guidance as well. In the absence of an adequately high level of autonomy that can be relied upon, substantial operator involvement is required, which not only severely limits operational gains but also creates significant new challenges in the areas of human-machine interaction and mixed-initiative control.
The meaning of assured operation of a human-machine system also needs considerable specification; assurance has been approached historically through design processes by following rigorous safety standards in development, and by demonstrating compliance through system testing, but largely in systems of bounded capability and where human roles were similarly bounded. These intersecting themes of collective intelligence, bidirectional trust, and continual assurance form the challenging and extraordinarily interesting themes of this symposium.
Symposium Objectives and Topics of Interest
- Objective: Gain insight into the key ideas and principles related to assured trust and autonomy, and the systems engineering principles needed to realize assured capabilities
- Trust and Emergence in Autonomous Human-Machine Teams
- Foundations of Autonomy and Collective Intelligence
- Engineering & Testing Methods for Assured Autonomy and Metrics for Trust/Bidirectional Trust Assessment
- Objective: Identify key challenges that must be overcome for public deployment of autonomous systems
- Ethics in Deploying Autonomous Human-Machine Teams
- Societal Consequences of Autonomous Human-Machine Teams
- Bidirectional Explainability
- Realtime Autonomy Management
- Resilience in Human-Machine Teams
Format
The symposium is planned as a 2-1/2-day event, ideally in person, to be held at Stanford University over March 21–23, 2022. The first two days will comprise a series of presentations; half of each day will involve invited talks (60 minutes each), and half regular talks (30 minutes each). Our plan is to have a balanced agenda between the two main topics of trust and autonomy. The morning of the third day is planned as a panel session, with a mix of specialists addressing the topic of “Alternative Paths to Developing Engineering Solutions for Human-Machine Teams that Meet Human Values, Laws, and Ethics,” moderated by Professors Gillespie and Llinas. This topic addresses the highest-level considerations that frame the question of “assurance” when dealing with trust and autonomy, consistent with the report of the National Security Commission on AI, which recommends that AI R&D “ensures that uses of AI and associated data in U.S. government operations comport with U.S. law and values.”
Submissions
At minimum, we seek extended abstracts of two pages related to the symposium themes; ideally, we seek full draft papers of 6 to 8 pages. Either should describe relevant research and accomplishments related to the submitted topic, with relevant citations. Submission will be via the EasyChair system.
Chair
Dr. James Llinas, 322 Bell Hall, University at Buffalo, Buffalo, N.Y., 14260; Phone: (716) 645-3624; Cell (716)863-8320; email: llinas@buffalo.edu
Organizing Committee
James Llinas (University at Buffalo), Ranjeev Mittu (U.S. Naval Research Laboratory), Scott Fouse (Fouse Consulting Services), Anthony Gillespie (University College London).
For More Information
For more information, please see the supplementary symposium site.