  1. Software Agents in Virtual Organizations: Good Faith and Trust
     Francisco Andrade, Paulo Novais, José Machado and José Neves
     Universidade do Minho, Portugal
     PRO-VE 2008, Poznan, Poland, September 9th

  2. Introduction
     - Virtual organizations tend to play an increasing part in electronic commerce, and so do software agents.
     - We must consider the capability of software agents to "think" and decide rationally and autonomously.
     - Software agents' behaviour will become less and less predictable: they will choose their own strategies and define their own plans.
     - Indeed, they may act in good faith or in bad faith.
     - An analysis of the major issue of trust in software agent environments is therefore required.

  3. Introduction (II)
     - Software agents may play a relevant role in virtual enterprises: temporary alliances of organizations intended to share skills, core competencies and resources in order to respond better to business opportunities.
     - The intervention of software agents implies interactions based on contracts and on relations of trust.

  4. Software Agents
     - Software agents not only operate without the direct intervention of humans; they also have some control over their actions and inner states.
     - Mental states: their behaviour is a product of reasoning processes over incomplete or unknown information. Agents make choices, and their behaviour cannot be fully predicted.

  5. Trust
     - Is it possible to trust software agents in business relations?
     - Trust is mainly a belief in the honesty or reliability of someone.
     - It is clearly a requisite of the utmost importance when deciding "how, when and who to interact with".
     - It cannot be assumed in advance that software agents will behave according to rules of honesty and correctness.

  6. Good Faith and Trust
     - Software agents (like humans) may act in good faith or in bad faith.
     - Reliability is important in electronic relations.
     - We need protocols that ensure that the actors will find no better option than telling the truth and interacting honestly; issues of trust are at stake.

  7. Trust
     - Two different approaches to trust: subjective trust and objective trust.
     - Subjective or individual perspective: an agent has some beliefs about the honesty or reciprocity of its interaction partners.
     - Objective, social or systemic perspective: the actors in the system are forced to be trustworthy by the rules of encounter (protocols and mechanisms that regulate the system).

  8. Trust (II)
     - At the individual level, trust arises from learning (agents do learn from experience) and from reputation or socio-cognitive models (the belief that someone is competent or willing to do something).
     - At the system level, trust can be ensured by constraints imposed by the system:
       - using certain protocols;
       - using guarantees of reliability through the references of a trusted third party.
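The individual-level mechanisms just described (learning from experience, plus third-party reputation) can be sketched as a minimal trust model. The update rule, the 0.7 weighting and the class name below are illustrative assumptions, not part of the original presentation.

```python
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    """Individual-level trust: direct experience blended with reputation."""
    experience: dict = field(default_factory=dict)  # partner -> (successes, total)
    learning_weight: float = 0.7                    # assumed weight on own experience

    def record(self, partner: str, honest: bool) -> None:
        # Learning from experience: count honest vs. total interactions.
        s, t = self.experience.get(partner, (0, 0))
        self.experience[partner] = (s + (1 if honest else 0), t + 1)

    def trust(self, partner: str, reputation: float = 0.5) -> float:
        """Blend the learned success rate with third-party reputation, both in [0, 1]."""
        s, t = self.experience.get(partner, (0, 0))
        if t == 0:
            return reputation  # no experience yet: rely on reputation alone
        learned = s / t
        return self.learning_weight * learned + (1 - self.learning_weight) * reputation

model = TrustModel()
model.record("agent_b", True)
model.record("agent_b", True)
model.record("agent_b", False)
```

With no recorded experience the model falls back entirely on reputation, which mirrors the slide's point that reputation and socio-cognitive beliefs substitute for missing first-hand learning.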

  9. Behaviour
     - Besides the question of the legal status of software agents (as tools, as legal persons, and so on), it is unavoidable to consider the issue of the behaviour of software agents.
     - Can such agents "know" the law and social standards of behaviour, and abide by their rules?

  10. Good Faith
     - Software agents and good faith.
     - Good faith is related to the ideas of fidelity, loyalty, honesty and trust in business.
     - Good faith may be understood:
       - in a psychological, subjective sense;
       - in an ethical, objective sense.

  11. Good Faith (II)
     - Good faith in the objective sense:
       - it relates to social norms and legal rules;
       - it concerns correct behaviour, not the actor's mental attitudes.
     - Good faith in the subjective sense:
       - it has to do with knowledge and belief;
       - it regards the actor's sincere belief that he or she is not violating other people's rights.

  12. Good Faith (III)
     - Good faith arises from general objective criteria related to loyalty and cooperation between parties.
     - Good faith is an archetype of social behaviour: loyalty in social relations, honest acting, fidelity, reliability, faithfulness and fair dealing, including the protection of reasonable reliance.

  13. Bad Faith
     - Acting in bad faith may lead to:
       - invalidation of some of the contractual clauses, or of the whole contract;
       - possibly, liabilities.
     - It must be considered that software agents acting in business relations will presumably act according to certain standards of behaviour.
     - Yet an agent's behaviour can be based both on its experience and on its built-in knowledge.

  14. Bad Faith (II)
     - Autonomous systems will produce behaviour determined by their previous experiences.
     - Furthermore, agents will be able to act strategically, calculating the best responses given the opponent's possible moves.
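The idea of calculating best responses can be illustrated with a minimal payoff-matrix example. The two strategies and the payoff values below are made-up assumptions (a prisoner's-dilemma-style matrix), not taken from the presentation.

```python
# Illustrative 2x2 game: keys are (our move, opponent's move), values are our payoff.
payoffs = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect",    "cooperate"): 5, ("defect",    "defect"): 1,
}

def best_response(opponent_move: str) -> str:
    """Pick the strategy maximizing our payoff against a given opponent move."""
    return max(("cooperate", "defect"), key=lambda s: payoffs[(s, opponent_move)])
```

In this particular matrix the strategic calculation always favours defection, which illustrates the slide's concern: a purely strategic agent may find bad-faith behaviour optimal unless the rules of encounter make honesty the better option.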

  15. Standards
     - Good faith criteria relate to objective standards of conduct.
     - These standards help determine whether or not the agent has observed reasonable commercial standards of fair dealing in the negotiation and performance of the contract.
     - There are both positive and negative requirements to be fulfilled.

  16. Attribution
     - Quite important is the issue of the attribution of acts:
       - Should the acts of a software agent be attributed to the user? (The software agent as an instrument or tool, with someone having control over the SA?)
       - Should the volition of the SA be considered autonomously? (The user may not have been directly involved or consulted; the user may not even be aware that the agent acted at all.)

  17. Autonomous Will
     - Whether we regard SAs as tools or as legal persons (in a near or distant future), we probably should not rely on a legal fiction of attributing the acts of software agents to humans.
     - The autonomous will of the agents must be considered for the purposes of:
       - good and bad faith;
       - divergences between will and declaration;
       - defects of the will.

  18. Trust
     - Whether SAs act in good faith or in bad faith is of the utmost importance for all those (humans or SAs) who have to deal with SAs.
     - Agents acting in good faith is also a relevant element of trust.
     - Trust will be a determinant and unavoidable question for contracting agents.

  19. Trust
     - Trust is intimately related to beliefs: trust that the other party will be honest and reliable (that the other party will do what it says it will do).
     - We can distinguish different approaches to trust:
       - at the individual level, an agent has some beliefs about the honesty or reciprocal nature of its interaction partners;
       - at the system level, the actors in the system are forced to be trustworthy by the rules of encounter (protocols and mechanisms that regulate the system).

  20. Smart Contracts
     - Quite interesting is the figure of "smart contracts": a set of promises, specified in digital form, including protocols within which the parties perform on these promises.
     - These contracts are in fact program code that itself enforces the contract (the terms of the contract are enforced by the logic of the program's execution).
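A minimal sketch of the point that contract terms are enforced by the logic of the program's execution, assuming a simple escrow-style arrangement. The class, amounts and settlement condition are illustrative assumptions, not taken from the presentation.

```python
class EscrowContract:
    """Promises in digital form whose terms are enforced by the code itself."""

    def __init__(self, price: float):
        self.price = price
        self.deposited = 0.0
        self.delivered = False

    def deposit(self, amount: float) -> None:
        self.deposited += amount

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self) -> str:
        # The program's logic enforces the terms: the seller is paid only
        # if the buyer deposited the full price AND delivery was confirmed.
        if self.delivered and self.deposited >= self.price:
            return "pay_seller"
        return "refund_buyer"
```

No external enforcement step is needed: a party that fails to perform simply cannot reach the favourable outcome, which is the sense in which the arrangement is "inescapable".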

  21. Smart Contracts
     - The perspective of smart contracts tries to avoid the difficulties of enforcement: instead of enforcement, the contract creates an inescapable arrangement.
     - Smart contracts can enhance trust in electronic contracting.
     - The use of contracts as games: games have rules (either fixed ones or a set of rules the players choose) and can only be played if the players abide by the rules; rules have to be followed in order to play the game.

  22. Smart Contracts
     - The contract as an electronic game, managed or arbitrated by a board manager (a trusted third party) who does not play the game himself but only allows the parties to make legal moves.
     - The referee or board manager may be either a human or a software agent.
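The board-manager idea can be sketched as a third party that does not play but admits only legal moves. The party names, move names and rule set below are illustrative assumptions.

```python
class BoardManager:
    """Trusted third party: does not play the game, only admits legal moves."""

    def __init__(self, legal_moves: dict):
        self.legal_moves = legal_moves  # party -> set of moves that party may make
        self.log = []                   # traceability of past interactions

    def submit(self, party: str, move: str) -> bool:
        if move in self.legal_moves.get(party, set()):
            self.log.append((party, move))
            return True                 # legal move admitted into the game
        return False                    # illegal move rejected, state unchanged

# Hypothetical buyer/seller game with fixed rules.
referee = BoardManager({
    "buyer":  {"offer", "pay"},
    "seller": {"accept", "deliver"},
})
```

The log kept by the referee also connects to the conclusion's point about traceability of past interactions as a basis for trust.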

  23. Conclusion
     - Trust at the system level may be enhanced by different mechanisms:
       - special interaction protocols (such as smart contracts);
       - reputation mechanisms;
       - security mechanisms;
       - authentication by trusted third parties;
       - information about the actors in the system, especially when delivered by trusted third parties (participants will act upon what they believe is trustworthy information);
       - networks of trust.

  24. Conclusion (II)
     - Trust will be highly dependent on the existence of social networks and on the traceability of past interactions among the agents.
     - There are technical difficulties (data logging).
     - But trust is fundamental for the use of software agents in electronic commerce and in virtual organizations.

  25. Acknowledgements
     The work described in this paper is part of the Intelligent Agents and Legal Relations project (POCTI/JUR/57221/2004), a research project supported by FCT (Science & Technology Foundation, Portugal).

  26. Software Agents in Virtual Organizations: Good Faith and Trust
      Francisco Andrade, Paulo Novais, José Machado and José Neves
      Universidade do Minho, Portugal
      PRO-VE 2008, Poznan, Poland, September 9th
