

AAAI 2014 Spring Symposia

March 24–26, 2014

Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, is pleased to present the AAAI 2014 Spring Symposium Series, to be held Monday through Wednesday, March 24–26. The titles of the eight symposia are as follows:

  • Applied Computational Game Theory
  • Big Data Becomes Personal: Knowledge into Meaning
  • Formal Verification and Modeling in Human-Machine Systems
  • Implementing Selves with Safe Motivational Systems and Self-Improvement
  • The Intersection of Robust Intelligence and Trust in Autonomous Systems
  • Knowledge Representation and Reasoning in Robotics
  • Qualitative Representations for Robots
  • Social Hacking and Cognitive Security on the Internet and New Media


Applied Computational Game Theory

There is large and growing interest in applying game theory to security, health, and sustainability, which are grand challenges for engineering in the 21st century. In fact, the last five years have seen game-theory-based systems developed and applied to real-world domains. For example, software assistants have been developed for randomized patrol planning for the Los Angeles International Airport police, the Federal Air Marshal Service, the United States Coast Guard, and the Los Angeles Sheriff’s Department. Game theory has also been used for decentralized control, operation, and management of future-generation electricity systems.
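As a toy illustration of the kind of randomized scheduling these deployed assistants compute (the real systems solve far larger Stackelberg games), the following Python sketch finds a defender's maximin patrol mix over a small, made-up payoff matrix. The targets, payoffs, and use of SciPy's linear-programming routine are illustrative assumptions, not details of any deployed system.

```python
# Toy maximin patrol scheduling: the defender randomizes over patrol
# targets so that the worst-case expected payoff is as high as possible.
# Payoffs and targets are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import linprog

# U[i, j] = defender's payoff when the defender patrols target i
# and the attacker attacks target j.
U = np.array([
    [ 5, -2, -1],   # patrol terminal A
    [-3,  4, -1],   # patrol terminal B
    [-2, -2,  3],   # patrol cargo area
])
n_def, n_att = U.shape

# Variables: x_0..x_{n-1} (patrol probabilities) and v (game value).
# Maximize v  <=>  minimize -v.
c = np.zeros(n_def + 1)
c[-1] = -1.0

# For every attacker choice j:  v - sum_i x_i * U[i, j] <= 0.
A_ub = np.hstack([-U.T, np.ones((n_att, 1))])
b_ub = np.zeros(n_att)

# Patrol probabilities sum to one.
A_eq = np.append(np.ones(n_def), 0.0).reshape(1, -1)
b_eq = np.array([1.0])

bounds = [(0, 1)] * n_def + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("patrol mix:", np.round(res.x[:-1], 3))
print("guaranteed expected payoff:", round(res.x[-1], 3))
```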

While there has been significant progress, there still exist many major challenges facing the design of effective approaches to deal with the difficulties in security, health and sustainability. Addressing these challenges requires collaboration from different communities including artificial intelligence, game theory, operations research, social science, and psychology. This symposium is structured to encourage a lively exchange of ideas between members from these communities.

Topics

Topics of interest include but are not limited to the following:

  • Game theory foundations
  • Algorithms for scaling to very large games
  • Human factors and intelligent user interfaces
  • Agent/human interaction for preference elicitation and optimization
  • Game-theoretic treatment of disease and contagion models
  • Distributed control in energy systems
  • Risk analysis

Organizing Committee

Manish Jain (University of Southern California, manishja@usc.edu), Albert Xin Jiang (University of Southern California, albertjiang@gmail.com), Bo An (Nanyang Technological University, boan@ntu.edu.sg), Samarth Swarup (Virginia Tech, swarup@vbi.vt.edu)

For More Information

For more information, please consult the supplemental symposium website.


Big Data Becomes Personal: Knowledge into Meaning — For Better Health, Wellness and Well-Being

One of the most significant shifts in our contemporary world is the trend toward obtaining and analyzing big data in nearly every venue of life. For better health, wellness, and well-being, it is essential to extract personally meaningful information from big data. However, the following outstanding challenges must be tackled to make big data personal: (1) how to quantify our health, wellness, and well-being so as to generate big data that can become meaningful knowledge; (2) how to turn large volumes of impersonal quantitative data into qualitative information that can improve the quality of life of the individual; and (3) how quantitative data and qualitative information can contribute to improving our health, wellness, and well-being. This symposium seeks to explore methods and methodologies for addressing these three questions, and to examine how big data is being made personally usable and meaningful. It will bring together an interdisciplinary group of researchers to discuss possible solutions for our wellness, focusing on AI techniques.

1. How to quantify our health, wellness, and well-being
Issues related to this question include self-tracking technology and the quantified-self approach. Recent self-tracking technologies that monitor personal health conditions such as sleep or daily activity open up new possibilities for creating value in our future personal wellness. In the quantified-self approach, recording weight or calories as the result of diet or exercise contributes to improving personal wellness from big data.

2. How to turn the large volumes of quantitative data into qualitative information on our health, wellness, and well-being
One of the issues related to this question is understanding ourselves (for example, our risk of developing cancer, or personal characteristics such as conservatism) by comparing personal medical data, personal genomes, or personal brain data with other people’s data. For this issue, cognitive and physiological modeling of humans is useful. Recent scientific advances in brain science, genetics, and psychology may yield new personal findings.
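As a minimal illustration of one ingredient of this quantitative-to-qualitative step, the following Python sketch bins hypothetical daily sleep and activity measurements into coarse qualitative labels; the records, thresholds, and categories are illustrative assumptions, not recommendations.

```python
# Minimal sketch: turning quantitative self-tracking data into
# qualitative labels. Records and thresholds are illustrative only.
daily_records = [
    {"date": "2014-03-24", "sleep_hours": 5.5, "steps": 3200},
    {"date": "2014-03-25", "sleep_hours": 7.8, "steps": 11200},
    {"date": "2014-03-26", "sleep_hours": 6.4, "steps": 7600},
]

def qualify_sleep(hours):
    # Hypothetical cut-offs, not clinical guidance.
    if hours < 6:
        return "short"
    if hours <= 8:
        return "adequate"
    return "long"

def qualify_activity(steps):
    if steps < 5000:
        return "sedentary"
    if steps < 10000:
        return "moderately active"
    return "active"

for rec in daily_records:
    print(rec["date"],
          "sleep:", qualify_sleep(rec["sleep_hours"]),
          "| activity:", qualify_activity(rec["steps"]))
```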

3. How the quantitative data and qualitative information improve our health, wellness, and well-being
Beyond collecting self-tracking data and turning them into meaningful information, it is important to clarify how such data and information contribute to improving our health, wellness, and well-being. One example is behavior change caused by big data, that is, behavior change that occurs when big data becomes personal. As another example, social media networks such as Facebook, which can enhance our personal values, provide big data (for example, shared pictures) that contributes to increasing our happiness and sense of well-being.

4. Applications, platforms and field studies
Finally, wellness service applications and field studies are welcome as a way of emphasizing the areas of our interest. Effective personal wellness applications pose significant challenges in terms of personalization, biomedical data mining, and accessibility.

Topics

Topics of interest include, but are not limited to, the following:

1. How to quantify our health, wellness, and well-being
Sleep monitoring, diet monitoring, vital data, diabetes monitoring, running/sport calorie monitoring, personal genome, personal medicine, new type of self-tracking device, portable mobile tools, health data collection, quantified self tools, experiments

2. How to turn the large volumes of quantitative data into qualitative information on our health, wellness, and well-being
Discovery informatics technologies: data mining and knowledge modeling for wellness, collective intelligence/knowledge, life log analysis (for example, vital data analyses, Twitter-based analysis), data visualization, human computation, biomedical informatics, and personal medicine. Cognitive and biomedical modeling: brain science, brain interfaces, physiological modeling, biomedical informatics, systems biology, network analysis, mathematical modeling, disease dynamics, personal genome, gene networks, genetics and lifestyle with the microbiome, and health/disease risk.

3. How the quantitative data and qualitative information improve our health, wellness, and well-being
Social data analyses and social relation design, mood analyses, human-computer interaction, health care communication systems, natural language dialog systems, personal behavior discovery, kansei, zone and creativity, compassion, calming technology, kansei engineering, gamification.

4. Applications, platforms and field studies
Medical recommendation systems, care support systems for the elderly, web services for personal wellness, games for health and happiness, life log applications, disease improvement experiments (for example, metabolic syndrome, diabetes), sleep improvement experiments, healthcare/disability support systems, community computing platforms.

Format

The symposium will consist of invited talks, presentations, posters, and interactive demos.

Contact

Takashi Kido (Ph.D., Computer Science)
Riken Genesis. Co., Ltd.
Toppan Building Higashikan 3F
Taito-Ku, Taito, 1-5-1, Tokyo, 110-8560, Japan
Telephone: +81-3-3839-8043
Fax: +81-3-3835-7154
E-mail: kido.takashi@gmail.com

Symposium Organizing Cochairs

  • Takashi Kido (Riken Genesis Co., Ltd., Japan), kido.takashi@gmail.com.
  • Keiki Takadama (The University of Electro-Communications, Japan), keiki@inf.uec.ac.jp.

For More Information

For more information, please consult the supplemental symposium website.


Formal Verification and Modeling in Human-Machine Systems

The goal of the symposium is to bring together the fields of formal verification, cognitive modeling, and task analysis to study the design and verification of real human-machine systems. Recent papers in each of these communities discuss modeling challenges and the application of basic formal verification in human-machine interaction; however, there is little communication between researchers in these different areas, and there are many open questions that require cross-disciplinary collaboration. The symposium will bring together experts from these communities in an environment where it is possible to explore key research areas, common solutions, near-term research problems, and the advantages of combining the best of each community.

Topics

What model classes, methodologies, and constructs are appropriate for modeling human and machine activities in a way that is amenable to formal verification? Examples include the following:

  • Programming languages
  • State machines
  • Activity models (for example, Brahms)
  • Cognitive models (SOAR, ACT-R, DIARC, and others)
  • Task-analysis-based models (GDTA, CWA, and others)
  • Probabilistic models
  • Behavioral game theory

What levels of abstraction are appropriate for such modeling, and what information is lost in using abstraction?

What are the contexts, if any, in which the trade-offs in authority among humans, autonomy, and model-based reasoning can be specified?

What is the impact on design of including explicit (meta-)reasoning models in the human-machine interaction loop?

What types of model-checkers are appropriate, and what other lessons from formal verification apply to human-machine systems?

What are the ethical considerations of using verified models to allocate responsibility and authority between humans and machines?

What organizational structures are appropriate for human-machine collaborative work?

  • Master-slave
  • Teammates
  • Principal-agent

How can dynamic models evolve in the presence of learning agents, both human and machine, and in the presence of inaccurate mental models?
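To make the model-checking question above concrete, the following Python sketch performs an explicit-state reachability check on a toy operator/automation state machine; the states, transitions, and safety property are invented for illustration and do not correspond to any particular system or tool discussed at the symposium.

```python
# Toy explicit-state reachability check on a human-machine model.
# States pair an automation mode with an operator belief; all states,
# transitions, and the safety property are illustrative assumptions.
from collections import deque

INITIAL = ("engaged", "believes_engaged")

TRANSITIONS = {
    ("engaged", "believes_engaged"): [
        ("disengaged", "believes_engaged"),     # silent automation drop-out
    ],
    ("disengaged", "believes_engaged"): [
        ("disengaged", "believes_disengaged"),  # operator notices an alert
    ],
    ("disengaged", "believes_disengaged"): [
        ("engaged", "believes_engaged"),        # operator re-engages
    ],
}

def is_unsafe(state):
    mode, belief = state
    # Safety property: the operator must never keep acting on a stale
    # belief that the automation is engaged when it is not.
    return mode == "disengaged" and belief == "believes_engaged"

def check(initial):
    seen, frontier = {initial}, deque([(initial, [initial])])
    while frontier:
        state, path = frontier.popleft()
        if is_unsafe(state):
            return path                          # counterexample trace
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

trace = check(INITIAL)
print("counterexample:" if trace else "property holds", trace or "")
```

Running the sketch reports a short counterexample (the silent drop-out leaves the operator with a stale belief), the kind of mode-confusion trace a real model-checker would surface.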

Invited Speakers

  • Amy Pritchett (Georgia Tech, USA)
  • Philippe Palanque (IRIT, University Paul Sabatier, France)
  • Christian Lebiere (Carnegie Mellon University, USA)

Organizing Committee

Ellen Bass, Drexel University, USA, Michael Goodrich, Brigham Young University, USA, Eric Mercer, Brigham Young University, USA, Neha Rungta, NASA Ames Research Center, USA

For More Information

For more information, please consult the supplemental symposium website.


Implementing Selves with Safe Motivational Systems and Self-Improvement

While artificial (general) intelligence most often focuses on tools/systems for collecting knowledge and solving problems or achieving goals rather than self-reflecting entities, this implementation-oriented symposium will focus instead on guided self-creation and improvement — particularly as a method of achieving human-level intelligence in machines through iterative improvement (“seed AI”).

In I Am a Strange Loop, Douglas Hofstadter argues that the key to understanding selves is the “strange loop”, a complex feedback network inhabiting our brains and, arguably, constituting our minds. Further, humans have both conscious and unconscious minds, attention, emotions, partial self-reflection, a moral sense, and many other aspects that are rarely addressed yet seem critical for the creation of a safe, self-sufficient autonomous system. This symposium will focus on the integration of these components into a coherent self-improving self. Ideally, the ultimate result will be a successful entity with extensive self-knowledge and a safe, moral/ethical motivational system that acts with discrimination to promote cooperation with, and contribution to, community via iterative improvement of self, tools, and theoretical constructs of relational dynamics and resource utilization, allocation, and sharing.

Topics

Potential topics include (but aren’t limited to) the following:

  • Integrative architectures with explicit motivations implementing "self" as
    • Operating system with "plug-ins"
    • Society of mind (Minsky)/economy of idiots (Baum)
    • Global workspace/consciousness (Baars/Franklin)
    • Authorship (Dennett/Wegner)
  • Safe/moral/ethical motivational systems
    • Value sets versus goal hierarchies
    • "Safe"/moral values/goal content
    • Evaluation schemes
  • Reflection
    • Self-examination
    • Self-modeling and self-knowledge
    • Goal-based self-evaluation for self-improvement
  • Attention and emotions
    • As knowledge/rules of thumb/"actionable qualia"
    • As (un)helpful biases (and intelligent improvement)
    • As evaluation/enforcement mechanisms
  • Integrating different knowledge/action representation schemes
    • Coordination and translation between schemes
    • Analyzing trade-offs/knowing when to switch
  • Self-improvement
    • Via automated tool/method incorporation and theory-inductive heuristics
    • Via learning/knowledge incorporation
    • Discovery (refactoring, modularization, encapsulation, and scale-invariance)

While solutions need to be grounded and extensible, approaches that start from some initial structure are preferred over tabula rasa, lowest-level bootstrapping approaches or first-causes explanations (except where these are fully extended to initial structures and/or used to justify such structures). While autopoiesis and "functional consciousness" are obviously key topics, phenomenal consciousness is, preferably, off-topic.

Primary Contact

Mark Waser (Digital Wisdom Institute, MWaser@DigitalWisdomInstitute.org)

For More Information

For more information, please consult the supplemental symposium website.


The Intersection of Robust Intelligence and Trust in Autonomous Systems

This AAAI symposium will explore the intersection of robust intelligence (RI) and trust across multiple contexts among autonomous hybrid systems (where hybrids are arbitrary combinations of humans, machines and robots). We seek methods for structuring teams or networks that increase robust intelligence and engender trust among a system of agents. But how can we determine the questions critical to the static and dynamic aspects of behavior and metrics of agent performance?

To better manage RI with AI to promote trust in autonomous agents and teams, our interest is in the theory, mathematics, computational models, and field applications at the intersection of RI and trust, not only in team-multitasking effectiveness or in modeling RI networks, but in the efficiency and trust engendered among interactants.

We seek to understand the intersection of RI and trust for humans interacting with systems (for example, teams, firms, networks), to use this information with AI to model RI and trust, and to predict outcomes from interactions among hybrids (for example, multitasking operations).

Systems that learn, adapt, and apply experience to problems may be better suited to respond to novel environmental challenges. One could argue that such systems are “robust” to the prospect of a dynamic and occasionally unpredictable world. We expect that systems exhibiting robustness will afford a greater degree of trust from the hybrids that interact with them. Robustness through learning, adaptation, and structure is determined by predicting and modeling the interactions of autonomous hybrid enterprises. How can we use these data to develop models indicative of normal or abnormal operations in a given context? We hypothesize that such models improve enterprise intelligence by allowing autonomous entities to continually adapt within normalcy bounds, leading to greater reliability and trust.
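The notion of normalcy bounds can be made concrete with a very small sketch: maintain a running estimate of normal behavior and flag observations that fall outside a band around it. The Python below is a minimal illustration under that assumption; the signal, window size, and threshold are invented for the example.

```python
# Minimal normalcy-bounds sketch: flag observations that fall outside
# a rolling mean +/- k standard deviations. All numbers are illustrative.
import statistics

def outside_normalcy(readings, window=10, k=3.0):
    """Yield (index, value) for readings outside the rolling normal band."""
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.stdev(recent) or 1e-9
        if abs(readings[i] - mu) > k * sigma:
            yield i, readings[i]

# Hypothetical task-completion times from a hybrid human-machine team.
times = [1.0, 1.1, 0.9, 1.05, 1.0, 0.95, 1.1, 1.0, 0.9, 1.05, 1.0, 4.2, 1.0]
for i, value in outside_normalcy(times):
    print(f"reading {i} = {value} lies outside the normalcy bounds")
```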

The focus of this symposium is how robust intelligence impacts trust in the system and how trust in the system impacts robustness. We will explore approaches to RI and trust including (for example): intelligent networks, intelligent agents, and intelligent multitasking by hybrids.

Organizing Committee

Jennifer Burke (Boeing, jennifer.l.burke2@boeing.com), Alan Wagner (Georgia Tech Research Institute, Alan.Wagner@gtri.gatech.edu), Don Sofge (Naval Research Laboratory, don.sofge@nrl.navy.mil), W.F. Lawless (Paine College, wlawless@paine.edu)

For More Information

For more information, please consult the supplemental symposium website.


Knowledge Representation and Reasoning in Robotics

Robots and agents deployed in homes, offices and other complex domains are faced with the formidable challenge of representing, revising and reasoning with incomplete domain knowledge acquired from sensor inputs and human feedback. Although many algorithms have been developed for qualitatively or quantitatively representing and reasoning with knowledge, the research community is fragmented, with separate vocabularies that are increasingly making it difficult for these researchers to communicate with each other. For instance, the rich body of research in knowledge representation using logical reasoning paradigms provides appealing commonsense reasoning capabilities, but does not support probabilistic modeling of the considerable uncertainty in sensing and acting on robots. In parallel, robotics researchers are developing sophisticated probabilistic algorithms that elegantly model the uncertainty in sensing and acting on robots, but it is difficult to use such algorithms to represent and reason with commonsense knowledge. Furthermore, algorithms developed to combine logical and probabilistic reasoning do not provide the desired expressiveness for commonsense reasoning and/or do not fully support the uncertainty modeling capabilities required in robotics.
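As a very small illustration of the gap described above, the following Python sketch combines a hard commonsense rule with a Bayesian update from a noisy sensor; the rule, prior, and sensor model are invented for illustration and are not tied to any particular framework.

```python
# Toy combination of logical and probabilistic reasoning for a robot.
# A commonsense rule prunes impossible hypotheses; Bayes' rule then
# weighs a noisy detection. All rules and numbers are illustrative.

# Prior belief over where a coffee mug is.
prior = {"kitchen": 0.5, "office": 0.3, "bathroom": 0.2}

# Commonsense (logical) knowledge: mugs are not kept in bathrooms.
def consistent(location):
    return location != "bathroom"

# A noisy detector reports "mug seen in office";
# likelihood[loc] = P(report | mug actually at loc).
likelihood = {"kitchen": 0.1, "office": 0.7, "bathroom": 0.1}

# Step 1: logical filtering removes inconsistent hypotheses.
filtered = {loc: p for loc, p in prior.items() if consistent(loc)}

# Step 2: Bayesian update with the sensor report, then renormalize.
unnormalized = {loc: p * likelihood[loc] for loc, p in filtered.items()}
total = sum(unnormalized.values())
posterior = {loc: p / total for loc, p in unnormalized.items()}

print(posterior)  # roughly {'kitchen': 0.19, 'office': 0.81}
```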

The objective of this symposium is to promote a deeper understanding of recent breakthroughs and challenges in the logical reasoning and probabilistic reasoning communities. We seek to encourage collaborative efforts towards building knowledge representation and reasoning architectures that support qualitative and quantitative descriptions of knowledge and uncertainty.

Topics and Format

The symposium will consist of paper and poster presentations, invited talks, breakout sessions, and demos. Topics of interest include the following:

  • Knowledge acquisition and representation.
  • Combining symbolic and probabilistic representations.
  • Reasoning about uncertainty.
  • Reasoning with incomplete knowledge.
  • Interactive and cooperative decision-making.
  • Learning and symbol grounding.
  • Commonsense reasoning.

Some of the presentations and talks will describe efforts that integrate, or motivate an integration of, logic-based and probabilistic algorithms for knowledge representation and/or commonsense reasoning on one or more robots or agents in different application domains. Other papers will ground these topics in research areas such as robot vision, human-robot (and multirobot) collaboration, and robot planning. This symposium will also share some sessions and invited speakers with the parallel AAAI symposium on Qualitative Representations for Robots.

Organizing Committee

Mohan Sridharan (Texas Tech University, USA, mohan.sridharan@ttu.edu), Fangkai Yang (The University of Texas at Austin, USA, fkyang@cs.utexas.edu), Subramanian Ramamoorthy (The University of Edinburgh, UK, s.ramamoorthy@ed.ac.uk), Volkan Patoglu (Sabanci University, Turkey, vpatoglu@sabanciuniv.edu), Esra Erdem (Sabanci University, Turkey, esraerdem@sabanciuniv.edu)

For More Information

For more information, please consult the supplemental symposium website.

(If your research is primarily in qualitative representations for robots, please consider submitting your paper to the Qualitative Representations for Robots AAAI Spring Symposium.)


Qualitative Representations for Robots

The fields of AI and robotics have many approaches to representation and reasoning. This symposium focuses on one approach, which has been growing in popularity in recent years: qualitative representations. Such representations abstract away from the quantitative features that underlie many physically situated systems, providing compact, structured representations that omit (unnecessary) detail. Qualitative representations have many advantages, including naturally encoding semantics for many systems, being accessible to humans, providing smaller state spaces for learning, enabling robust and complex applications to be built, and being well suited to communication. These advantages have seen them increasingly used in intelligent, physically grounded systems. This work is being done across many different subfields of AI, including knowledge representation and reasoning, planning, learning, and perception. We strongly believe that the time is now right to bring these disparate groups together to share experiences and technical knowledge. We also wish to connect recent robotics work on qualitative representations to the rich history of related ideas in AI.
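A minimal example of the abstraction step described above: the following Python sketch maps metric object positions onto coarse qualitative spatial relations that a robot could reason over or communicate; the relation set, coordinates, and threshold are illustrative assumptions.

```python
# Minimal sketch: abstracting metric positions into qualitative
# spatial relations (left-of / right-of / in-front-of / behind).
# Objects, coordinates, and the threshold are illustrative only.

def qualitative_relation(a, b, threshold=0.2):
    """Describe object a relative to object b on the robot's x/y plane."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    terms = []
    if dx < -threshold:
        terms.append("left-of")
    elif dx > threshold:
        terms.append("right-of")
    if dy > threshold:
        terms.append("in-front-of")
    elif dy < -threshold:
        terms.append("behind")
    return " and ".join(terms) if terms else "co-located-with"

positions = {"cup": (0.42, 1.10), "plate": (0.05, 1.05)}
print("cup is", qualitative_relation(positions["cup"], positions["plate"]), "the plate")
```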

Topics

This symposium will address topics related to the use of qualitative representations or reasoning on robotics problems (for example, learning, task/motion planning, communication), including qualitative representations of the following:

  • Space
  • Motion
  • Time
  • Uncertainty
  • Action/behavior
  • Appearance
  • Context
  • Categorical or functional knowledge

We particularly encourage contributions that exploit the key features of qualitative approaches to provide new functionality to robots, for example, to exploit coarse background knowledge or to learn from experience over long periods or across large-scale space.

The symposium will include invited talks, presentations on accepted papers, discussion and demonstrations. This event runs in parallel with the AAAI Spring Symposium on Knowledge Representation and Reasoning in Robotics. Due to the overlapping nature of these events, we will have joint sessions and coordinate our activities.

Symposium Chair

Nick Hawes, University of Birmingham (n.a.hawes@cs.bham.ac.uk).

Organizing Committee

Alper Aydemir (NASA Jet Propulsion Laboratory), Chris Burbridge and Lars Kunze (University of Birmingham), Marc Hanheide and Nicola Bellotto (University of Lincoln), Luca Iocchi and Daniele Nardi (Sapienza Università di Roma), Patric Jensfelt and John Folkesson (Kungliga Tekniska Högskolan), Michael Karg (Technische Universität München), John D. Kelleher (Dublin Institute of Technology), Alexandra Kirsch (University of Tübingen), Matthew Klenk (Palo Alto Research Center), Kate Lockwood (California State University, Monterey Bay), Fiona McNeill (University of Edinburgh), Andrzej Pronobis (University of Washington), Diedrich Wolter (Universität Bremen), Jure Zabkar (University of Ljubljana)

For More Information

For more information, please consult the supplemental symposium website.


Social Hacking and Cognitive Security on the Internet and New Media

The Internet and new media (INM) fundamentally alter the landscape of influence and persuasion in three major ways. First, the ability to influence is now democratized, in that any individual or group has the potential to communicate and influence large numbers of others online in a way that would have been prohibitively expensive in the pre-Internet era. It is also now significantly more quantifiable, in that data from the INM can be used to measure the response of crowds to influence efforts and the impact of those operations on the structure of the social graph. Finally, influence is also far more concealable, in that users may be influenced by information provided to them by anonymous strangers, or even in the simple design of an interface.

“Social engineering” in the computer security space typically has been the venue for discussing the hacking of social interaction. However, the scope of this has, for the most part, been limited to only a narrow field of action: one-on-one conversations that garner sensitive information from gullible members of a target organization. A major goal of this symposium is to establish the field of Cognitive Security (CogSec) whose goal is to update and expand this limited concept to meet the modern realities of influence. CogSec is interdisciplinary and draws on fields such as cognitive science, computer science, social science, security, marketing, political campaigning, public policy, and psychology.

This symposium will convene a diverse group of experts relevant to the broad area of “CogSec” that includes the development of methods that (1) detect and analyze cognitive vulnerabilities (that is, susceptibilities to false information) and (2) block efforts that exploit cognitive vulnerabilities to influence collective action at multiple scales.

The goal of the symposium is to bring together fundamental research from academia as well as the public and private sectors and develop an applied engineering methodology. To this end, we encourage paper submissions in areas relevant to developing the following:

  • A Statement of the Field of Cognitive Security
  • Cognitive Vulnerability Analysis and Modeling
  • A Defense Doctrine of Cognitive Security
  • Design Principles for Effective Network Shaping
  • A Code of Ethics of Social Shaping and Social Hacking

Topics

Examples of topic areas of interest are

  • Artificial intelligence
  • Computational social science
  • Anthropology of internet and new media culture
  • Data-driven political campaigning
  • Data-driven marketing and advertising
  • Bot swarms
  • Algorithmic detection of cognitive biases

Primary Contact

Tim Hwang (tim@pacsocial.com).

Organizing Committee

Rand Waltzman (DARPA, rand.waltzman@darpa.mil), Tim Hwang (Pacific Social Architecting, tim@pacsocial.com), Alex “Sandy” Pentland (MIT Media Lab, sandy@media.mit.edu), Albert-László Barabási (Center for Complex Network Research, Northeastern University, barabasi@gmail.com), Jure Leskovec (Department of Computer Science, Stanford, jure@cs.stanford.edu), Nicco Mele (EchoDitto, John F. Kennedy School of Government, Harvard University, nicco_mele@hks.harvard.edu), Jodee Rich (PeopleBrowsr, JodeeRich@kred.com)

For More Information

For more information, please consult the supplemental symposium website.

