AFSecurity Seminar

Revision as of 06:34, 9 June 2017

Intelligence Analysis

DATE: 16 June 2017

LOCATION: Kristen Nygaards sal (room 5370), Ole Johan Dahl's House.

AGENDA:

13:00h Welcome at IFI

13:15h Talk: Intelligence Analysis: Reflections on the Human – Machine Analytic Enterprise from a Behavioral Computer Science Perspective.

14:00h Discussion


SPEAKER: Tore Pedersen (Norwegian Defence Intelligence School)

ABSTRACT: Making judgments automatically, without conscious awareness, also termed ‘heuristic processing’, is particularly adaptive and appropriate in situations where people have extensive previous (tacit) knowledge and experience. However, people tend to employ the same heuristic processing mode also in situations where they have less previous experience. In such situations heuristic processing is likely to lead to a biased judgment, whereas analytic processing with conscious awareness would be more likely to lead to an unbiased judgment.

This phenomenon may extend to the machine-based automated knowledge generation of today and tomorrow: human biases may unintentionally be imposed on machines through the programming of initial algorithms, and those biases may persist in the machines’ automated decision processes. Additionally, machines’ self-adjustment of algorithms in learning processes may, as a result of learning from non-representative data, lead to equally biased output.

Moreover, in the process of making inference leaps from data to knowledge, such as when claiming a causal relation between present events or when predicting what the future might look like, validity in analytic products relies on the ability to apply sound scientific reasoning: knowledge is bounded by the quality and representativeness of the collected data, as well as by the limitations of research designs and the assumptions and restrictions inherent in various (statistical) tests. With today’s increased access to data sources and increased data volumes, the same (traditional) rigorous scientific standards must nevertheless still be applied, both in collection and analysis.

Thus, in the human–machine analytic enterprise of today and tomorrow, it is important to be aware of the potential threat from bias, as well as the potential threat from non-adherence to scientific reasoning, because both of these phenomena may have implications for the validity of the output of human–machine analytic processes.
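The abstract's point about learning from non-representative data can be illustrated with a minimal sketch (the population, trait rate, and sampling scheme below are invented for illustration): the same estimator, applied to a skewed collection from the same population, systematically overstates the trait's prevalence.

```python
import random

random.seed(0)

# Hypothetical population: a trait occurs in 30% of individuals.
population = [1] * 3000 + [0] * 7000

# Representative sample: a uniform random draw from the whole population.
representative = random.sample(population, 500)

# Non-representative sample: drawn from a pool that over-represents the
# trait (3:1 instead of 3:7), mimicking a skewed collection process.
skewed_pool = [1] * 3000 + [0] * 1000
skewed = random.sample(skewed_pool, 500)

def estimate(sample):
    """Estimate the trait rate as the sample proportion."""
    return sum(sample) / len(sample)

print(f"true rate:             0.30")
print(f"representative sample: {estimate(representative):.2f}")  # close to 0.30
print(f"skewed sample:         {estimate(skewed):.2f}")          # far above 0.30
```

The estimator itself is identical in both cases; only the collection process differs, which is exactly the sense in which "rigorous scientific standards must be applied in collection" as well as in analysis.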

SPEAKER BIO: Tore Pedersen is Associate Professor and Director of the Center for Intelligence Studies at the Norwegian Defence Intelligence School (NORDIS). He is also a visiting researcher at the Department of Informatics, University of Oslo, and an affiliated Associate Professor at the Department of Psychology, Bjørknes University College. He is currently engaged in empirical research on cognitive aspects of the national intelligence and security domain.


AFSecurity is organised by the University of Oslo SecurityLab.