SLIDE 1

Software Agents in Virtual Organizations: Good Faith and Trust

Francisco Andrade, Paulo Novais, José Machado and José Neves

Universidade do Minho - Portugal

PRO-VE 2008, Poznan, Poland, September 9th

SLIDE 2

Introduction

Virtual organizations tend to play an increasing part in electronic commerce, and so do software agents.

We must consider the capability of software agents to rationally and autonomously "think" and decide.

Software agents' behaviour will become less and less predictable: they will choose their own strategies and define their own plans.

They may, in fact, act in good faith or in bad faith.

An analysis of the major issue of trust in software agent environments is thus required.

SLIDE 3

Introduction (II)

Software agents may play a relevant role in virtual enterprises:

Temporary alliances of organizations intended to share skills, core competencies and resources in order to better respond to business opportunities.

This intervention of software agents calls for interactions based on contracts and relations of trust.

SLIDE 4

Software Agents

Software agents not only operate without the direct intervention of humans; they also have some control over their actions and inner states.

Mental states: their behaviour is the product of reasoning processes over incomplete or unknown information. Agents make their own choices, so their behaviour cannot be fully predicted.

SLIDE 5

Trust

Is it possible to trust software agents in business relations?

Trust is mainly a belief in the honesty or reliability of someone.

It is clearly a requisite of the utmost importance when deciding "how, when and who to interact with".

It cannot be assumed in advance that software agents will behave according to rules of honesty and correctness.

SLIDE 6

Good Faith and Trust

Software agents (like humans) may act in good faith or in bad faith.

Hence the importance of reliability in electronic relations.

There is a need for protocols that ensure the actors will find no better option than telling the truth and interacting honestly; issues of trust are at stake.

SLIDE 7

Trust

There are two different approaches to trust: subjective trust and objective trust.

Subjective or individual perspective: an agent has some beliefs about the honesty or reciprocity of its interaction partners.

Objective, social or systemic perspective: the actors in the system are forced to be trustworthy by the rules of encounter (the protocols and mechanisms that regulate the system).

SLIDE 8

Trust (II)

At the individual level, trust arises from learning (agents do learn from experience) and from reputation or socio-cognitive models (the belief that someone is competent or willing to do something).

At the system level, trust can be ensured by constraints imposed by the system:

Using certain protocols.

Using guarantees of reliability through the references of a trusted third party.
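The individual-level mechanisms described here can be sketched in code. The following is a minimal, illustrative model (not from the paper): direct experience is tracked as pseudo-counts of honest and dishonest interactions, and the resulting belief can be blended with a reputation report obtained from a trusted third party. The class name, the uniform prior and the weighting scheme are all assumptions.

```python
class TrustModel:
    """Toy subjective trust model: beliefs updated from direct experience,
    optionally blended with a third-party reputation report."""

    def __init__(self):
        # pseudo-counts of honest / dishonest interactions (uniform prior)
        self.good = 1
        self.bad = 1

    def observe(self, honest):
        # learning from experience: update beliefs after each interaction
        if honest:
            self.good += 1
        else:
            self.bad += 1

    def trust(self, reputation=None, weight=0.5):
        # expected probability that the partner behaves honestly
        direct = self.good / (self.good + self.bad)
        if reputation is None:
            return direct
        # blend direct experience with a reputation report from others
        return (1 - weight) * direct + weight * reputation
```

After observing three honest interactions and one dishonest one, `trust()` returns 4/6; passing a `reputation` score shifts the estimate toward the third party's report.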

SLIDE 9

Behaviour

Besides the question of the legal status of software agents (as tools, as legal persons, and so on...),

it is unavoidable to consider the issue of the behaviour of software agents. Can such agents "know the Law" and social standards of behaviour, and abide by their rules?

SLIDE 10

Good Faith

Software agents and good faith:

Good faith is related to the ideas of fidelity, loyalty, honesty and trust in business.

Good faith may be understood in a psychological, subjective sense, or in an ethical, objective sense.

SLIDE 11

Good Faith (II)

Good faith in the objective sense:

It relates to social norms and legal rules. It concerns correct behaviour, not the actor's mental attitudes.

Good faith in the subjective sense:

It has to do with knowledge and belief. It concerns the actor's sincere belief that he or she is not violating other people's rights.

SLIDE 12

Good Faith (III)

Good faith arises from general objective criteria related to loyalty and cooperation between parties.

Good faith is an archetype of social behaviour:

Loyalty in social relations, honest acting, fidelity, reliability, faithfulness and fair dealing,

including the protection of reasonable reliance.

SLIDE 13

Bad Faith

Acting in bad faith may lead to:

Invalidation of some of the contractual clauses, or of the whole contract.

It may eventually give rise to liabilities.

It must be considered that software agents acting in business relations will presumably act according to certain standards of behaviour. Yet an agent's behaviour can be based both on its experience and on its built-in knowledge.

SLIDE 14

Bad Faith (II)

Autonomous systems will produce behaviour determined by their previous experiences.

Furthermore, agents will be able to act strategically,

calculating the best responses given the opponent's possible moves.
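Strategic play of this kind can be illustrated with a tiny best-response computation. The payoff matrix and action names below are hypothetical; the sketch simply maximizes expected payoff against a belief about the opponent's moves.

```python
def best_response(payoffs, opponent_beliefs):
    """Return the action that maximizes expected payoff, given a
    probability distribution over the opponent's possible moves.

    payoffs: dict mapping (my_action, their_action) -> my payoff
    opponent_beliefs: dict mapping their_action -> probability
    """
    my_actions = sorted({a for (a, _) in payoffs})

    def expected(action):
        return sum(p * payoffs[(action, b)]
                   for b, p in opponent_beliefs.items())

    return max(my_actions, key=expected)

# Illustrative payoffs: an agent choosing between dealing honestly and
# defecting, believing its partner deals honestly 80% of the time.
payoffs = {
    ("honest", "honest"): 3, ("honest", "defect"): 0,
    ("defect", "honest"): 4, ("defect", "defect"): 1,
}
beliefs = {"honest": 0.8, "defect": 0.2}
```

With these prisoner's-dilemma-like payoffs, the best response is to defect even against a mostly honest partner, which is exactly why the deck argues for protocols under which actors "find no better option than telling the truth".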

SLIDE 15

Standards

Good faith criteria relate to objective standards of conduct.

These standards will help determine whether or not the agent has observed reasonable commercial standards of fair dealing in the negotiation and performance of the contract.

(There are both positive and negative requirements to be fulfilled.)

SLIDE 16

Attribution

Quite important is the issue of the attribution of the acts:

Should the acts of a software agent be attributed to the user? (The software agent as an instrument or tool, with someone having control over it?)

Should the volition of the software agent be considered autonomously? (The user may not have been directly involved or consulted; the user may not even be aware that the agent acted at all.)

SLIDE 17

Autonomous Will

Regardless of considering software agents as tools or as legal persons (in a near or distant future),

we probably should not rely on a legal fiction of attribution of the acts of software agents to humans.

The autonomous will of the agents must be considered for the purposes of:

Good and bad faith. Divergences between will and declaration. Defects of the will.

SLIDE 18

Trust

Whether software agents act in good or in bad faith

is of the utmost importance for all those (humans or software agents) that have to deal with them.

Agents acting in good faith is also a relevant element of trust.

Trust will be a decisive and unavoidable question for contracting agents.

SLIDE 19

Trust

Trust is intimately related to beliefs: trust that the other party will be honest and reliable

(that the other party will do what it says it will do).

We can distinguish different approaches to trust:

At the individual level (an agent has some beliefs about the honesty or reciprocity of its interaction partners).

At the system level (the actors in the system are forced to be trustworthy by the rules of encounter: the protocols and mechanisms that regulate the system).

SLIDE 20

Smart Contracts

Quite interesting is the figure of "smart contracts":

A set of promises, specified in digital form, including protocols within which the parties perform on these promises.

These contracts are actually program code that by itself enforces the contract (the terms of the contract are enforced by the logic of the program's execution).
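The idea that the terms are enforced by the logic of the program's execution can be sketched as a toy escrow in Python. This is an illustrative assumption for exposition, not the paper's implementation and not a real smart-contract platform; all names are hypothetical.

```python
class EscrowContract:
    """Toy 'smart contract': the code itself holds the performance logic,
    so the terms are enforced by the program's execution rather than by
    an external authority."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = False
        self.delivered = False

    def deposit(self, amount):
        # the code refuses any performance that deviates from the terms
        if amount != self.price:
            raise ValueError("deposit must match the agreed price")
        self.deposited = True

    def confirm_delivery(self):
        self.delivered = True

    def settle(self):
        # funds move only if both promises were performed; no party can
        # unilaterally escape the arrangement
        if self.deposited and self.delivered:
            return ("pay", self.seller, self.price)
        if self.deposited:
            return ("refund", self.buyer, self.price)
        return ("void", None, 0)
```

The arrangement is "inescapable" in the sense the next slide describes: settlement outcomes are fixed by the code's branches, not by either party's later choices.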
SLIDE 21

Smart Contracts

The perspective of smart contracts tries to escape the difficulties of enforcement:

Instead of enforcement, the contract creates an inescapable arrangement.

Smart contracts can enhance trust in electronic contracting.

The use of contracts as games: games have rules (either fixed ones or a set of rules the players choose) and can only be played if the players abide by the rules. Rules have to be followed in order to play the game.

SLIDE 22

Smart Contracts

The contract as an electronic game,

managed or arbitrated by a board manager (a trusted third party) who does not play the game himself but only allows the parties to make legal moves.

(The referee or board manager may be either a human or a software agent.)
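The board-manager idea can be sketched as a minimal, hypothetical referee: it holds the contract's rule table, rejects illegal moves without ever playing itself, and records a traceable history of the interaction. The state names, moves and party names below are all illustrative.

```python
class BoardManager:
    """Toy trusted third party: it does not play the game itself, it only
    checks that each attempted move is legal under the agreed rules."""

    def __init__(self, rules, parties):
        self.rules = rules        # state -> set of legal moves
        self.parties = set(parties)
        self.state = "start"
        self.history = []         # traceable record of past interactions

    def play(self, party, move):
        if party not in self.parties:
            raise PermissionError(f"{party!r} is not a party to this contract")
        if move not in self.rules.get(self.state, set()):
            raise ValueError(f"illegal move {move!r} in state {self.state!r}")
        self.history.append((party, move))
        self.state = move         # here, each legal move names the next state
        return self.state

# A minimal negotiation "game": an offer, then acceptance or rejection.
rules = {"start": {"offer"}, "offer": {"accept", "reject"}}
```

Because every move passes through `play`, the parties can only make legal moves, which is precisely the referee's role described above.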

SLIDE 23

Conclusion

Trust at the system level may be enhanced by different mechanisms:

Special interaction protocols (such as smart contracts). Reputation mechanisms. Security mechanisms.

Authentication by trusted third parties. Information about actors in the system delivered by trusted third parties (participants will act upon what they believe to be trustworthy information). Networks of trust.

SLIDE 24

Conclusion (II)

Trust will be highly dependent on the existence of social networks and on the traceability of past interactions among the agents.

There are technical difficulties (data logging). But trust is fundamental for the use of software agents in electronic commerce and in virtual organizations.

SLIDE 25

Acknowledgements

The work described in this paper is part of the Intelligent Agents and Legal Relations project (POCTI/JUR/57221/2004), a research project supported by FCT (Science & Technology Foundation, Portugal).

SLIDE 26

Software Agents in Virtual Organizations: Good Faith and Trust

Francisco Andrade, Paulo Novais, José Machado and José Neves

Universidade do Minho - Portugal

PRO-VE 2008, Poznan, Poland, September 9th