

1. Empirical Evaluation of Search Algorithms for Satisficing Planning
   Patrick von Reth
   Artificial Intelligence Group, Department of Mathematics and Computer Science, University of Basel
   2/9/2015
   Outline: Background · Algorithms · Implementation & Experiments · Comparison · Conclusion & Future Work

2. GBFS
   Best-first search: an evaluation function f(s) is used to find the most promising state to expand.
   GBFS: f(s) = h(s).
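
The scheme above can be sketched as follows; `successors`, `h` and `is_goal` are placeholder callables, and the whole function is an illustrative sketch rather than the planner's actual implementation:

```python
import heapq

def gbfs(initial, successors, h, is_goal):
    """Greedy best-first search sketch: always expand the state with the
    lowest heuristic value, i.e. f(s) = h(s).
    `successors(s)` yields (action, state) pairs; states must be hashable."""
    open_list = [(h(initial), initial, [])]   # entries: (h value, state, plan)
    closed = set()
    while open_list:
        _, state, plan = heapq.heappop(open_list)
        if is_goal(state):
            return plan
        if state in closed:
            continue
        closed.add(state)
        for action, succ in successors(state):
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, plan + [action]))
    return None   # search space exhausted without reaching a goal
```

On a toy number-line task (start at 0, goal 5, h(s) = |5 - s|) this greedily walks straight to the goal.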

3. Misleading heuristics
   Exploration of states not leading to a goal.
   Plateaus: many states are explored with no improvement of h(s).
   Random exploration: explore random states from the open list.
   Local exploration: start a search on a limited subset of states.

4. Search enhancements
   Deferred evaluation: states are inserted with the heuristic value of their parent and evaluated only when they are explored.
   Preferred operators: operators that are most probably part of a solution; alternate open lists.

5. ε-GBFS
   Extension of standard GBFS.
   With probability ε, select a state uniformly at random from the open list.
   With probability 1 − ε, use the standard behaviour of GBFS.
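
A minimal sketch of the ε selection rule, assuming an open list of (h, state) pairs; the class name and the lazy heap repair are illustrative choices, not the thesis implementation:

```python
import heapq
import random

class EpsilonOpenList:
    """epsilon-GBFS open list sketch: with probability eps, pop a uniformly
    random entry; otherwise pop the minimum-h entry as in standard GBFS."""

    def __init__(self, eps, rng=None):
        self.eps = eps
        self.rng = rng or random.Random(0)
        self.heap = []      # heap ordered by (h, state)
        self.entries = []   # flat list for uniform random removal

    def push(self, h, state):
        entry = (h, state)
        heapq.heappush(self.heap, entry)
        self.entries.append(entry)

    def pop(self):
        if self.rng.random() < self.eps:
            # exploration step: uniform random choice from the open list
            entry = self.entries.pop(self.rng.randrange(len(self.entries)))
            self.heap.remove(entry)   # O(n); a real implementation avoids this
            heapq.heapify(self.heap)
            return entry
        # standard GBFS step: minimum h value first
        entry = heapq.heappop(self.heap)
        self.entries.remove(entry)
        return entry
```

With eps=0 this behaves exactly like a standard GBFS open list; with eps=1 every pop is a uniform random draw.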

6. Type-based exploration
   States are inserted into buckets based on a type key such as h(s), g(s), const(1), ...
   A bucket is selected uniformly at random, and then a state within that bucket, also uniformly at random.
   Used alternating with a standard open list.
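
The bucket selection can be sketched as follows, with the type key (for example the pair (h(s), g(s))) supplied by the caller; all names are illustrative:

```python
import random
from collections import defaultdict

class TypeBasedOpenList:
    """Type-based exploration sketch: states are grouped by a type key;
    pop() picks a bucket uniformly at random, then a state uniformly at
    random within that bucket."""

    def __init__(self, rng=None):
        self.buckets = defaultdict(list)
        self.rng = rng or random.Random(0)

    def push(self, key, state):
        self.buckets[key].append(state)

    def pop(self):
        key = self.rng.choice(list(self.buckets))            # uniform over types
        bucket = self.buckets[key]
        state = bucket.pop(self.rng.randrange(len(bucket)))  # uniform in bucket
        if not bucket:
            del self.buckets[key]    # drop empty buckets from the type pool
        return state
```

Because the draw is uniform over buckets first, a rarely-populated type is as likely to be explored as a crowded one, which is the point of the technique.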

7. Enforced hill climbing
   Run a standard GBFS from the current state until a state s' with a better h(s') value is found or the search fails.
   Run a new GBFS from state s'.
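
A sketch of the enforced hill climbing loop; the inner search is written here as a plain breadth-first search (the classic FF formulation) rather than the GBFS variant mentioned above, and all names are illustrative:

```python
from collections import deque

def enforced_hill_climbing(initial, successors, h, is_goal):
    """EHC sketch: from the current state, search outward until a state
    with strictly better h is found, then restart the search from there.
    Returns the accumulated plan, or None if EHC fails."""
    state, plan = initial, []
    while not is_goal(state):
        frontier = deque([(state, [])])
        seen = {state}
        improved = None
        while frontier and improved is None:
            s, path = frontier.popleft()
            for action, succ in successors(s):
                if succ in seen:
                    continue
                seen.add(succ)
                if h(succ) < h(state):
                    improved = (succ, path + [action])   # better h found
                    break
                frontier.append((succ, path + [action]))
        if improved is None:
            return None    # search failed: no better state reachable
        state, extra = improved
        plan += extra      # commit the path and restart from s'
    return plan
```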

8. Monte-Carlo random walks
   Random exploration: random operators are applied; only the end point is evaluated.
   Multiple random walks: the path providing the best improvement is added to the global path.
   Configurations: helpful actions, dead-end avoidance, iterative deepening, acceptable progress.
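
One step of the method can be sketched as follows; the configurations listed above (helpful actions, dead-end avoidance, ...) are omitted, and the parameter names and defaults are arbitrary choices for illustration:

```python
import random

def mc_random_walk_step(state, successors, h, walks=10, length=5, rng=None):
    """Monte-Carlo random walk sketch: run several random walks from
    `state`, evaluate only each walk's end point, and return the end
    point and path with the best (lowest) h value."""
    rng = rng or random.Random(0)
    best_h, best_end, best_path = float("inf"), None, None
    for _ in range(walks):
        s, path = state, []
        for _ in range(length):
            options = successors(s)
            if not options:
                break                      # dead end: stop this walk early
            action, s = rng.choice(options)
            path.append(action)
        end_h = h(s)                       # only the end point is evaluated
        if end_h < best_h:
            best_h, best_end, best_path = end_h, s, path
    return best_end, best_path
```

The driver would append `best_path` to the global path and repeat from `best_end`, restarting when no walk improves h.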

9. Local exploration
   Start a standard GBFS.
   If the heuristic value has not improved over a period of steps, start a local search.
   The depth of the local search is limited; the closed list is shared.
   The local search ends if:
   - the configured depth is reached,
   - a state s' with h(s') < h(s) is found, or
   - the local search fails (the local open list is empty).
   Remaining states are merged back.
   Alternate configuration: local random walks.
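
The depth-limited local search can be sketched as follows, assuming hashable states and a closed set shared with the global search; the termination conditions mirror the three cases above, and all names are illustrative:

```python
import heapq

def local_search(s0, successors, h, closed, max_depth):
    """GBFS-LS inner search sketch: expand greedily from s0 until a state
    with h < h(s0) is found, the depth limit is hit, or the local open
    list runs dry. `closed` is shared with the global search. Returns
    (improving state or None, leftover states to merge back)."""
    bound = h(s0)
    local_open = [(h(s0), 0, s0)]          # entries: (h value, depth, state)
    while local_open:
        hval, depth, s = heapq.heappop(local_open)
        if hval < bound:
            # improvement found: hand back s and the unexpanded states
            return s, [t for _, _, t in local_open]
        if depth >= max_depth or s in closed:
            continue                        # depth limit, or already seen
        closed.add(s)
        for _, succ in successors(s):
            if succ not in closed:
                heapq.heappush(local_open, (h(succ), depth + 1, succ))
    return None, []                         # local search failed
```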

10. Diverse best-first search
    Global open list: probabilistic selection of states based on their h(s) and g(s) values; smaller g(s) and h(s) are preferred.
    Local open list: a standard open list, used only for local searches.
    The local search is limited by the initial h(s).
    Remaining states are merged into the global open list, and the next local search is started.
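
The probabilistic global selection can be sketched as follows; the concrete bias used here (weight 2^-rank, preferring smaller h then smaller g) is an illustrative stand-in, not the published DBFS distribution:

```python
import random

def dbfs_select(buckets, rng=None):
    """DBFS global open list sketch: `buckets` maps an (h, g) pair to the
    list of states with those values. A state is chosen by first drawing
    an h value and then a g value, both draws biased toward smaller
    values, and then taking a state from that bucket."""
    rng = rng or random.Random(0)

    def biased(values):
        ranked = sorted(set(values))
        weights = [2.0 ** -i for i in range(len(ranked))]   # smaller is likelier
        return rng.choices(ranked, weights=weights)[0]

    h = biased(hv for hv, _ in buckets)
    g = biased(gv for hv, gv in buckets if hv == h)
    return buckets[(h, g)].pop()
```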

11. Experiments
    All experiments were run on the same benchmark sets as in the original papers.
    Results named "base" are those of a standard GBFS.
    [Bar chart, coverage sum on a scale of 0 to 650: original results 528; our results 589; our results of a second implementation 589.]

12. ε-GBFS
    Results: scale similarly to the original.
    Two implementations: bucket based; heap based, FIFO by ID.

    Coverage sums (scale 500 to 650):
    ε      original   ours   second implementation
    base   528        589    589
    0.00   -          588    584
    0.05   578        607    618
    0.10   581        599    616
    0.20   585        596    621
    0.30   584        599    608
    0.50   574        581    602
    0.75   546        522    581

13. ε-GBFS
    [Log-log scatter plot: RandomOpenList time usage vs. RandomBucketOpenList time usage, in seconds, over the range 10^-2 to 10^4.]

    Action               RandomBucketOpenList   RandomOpenList
    Insert state         O(1)                   O(log(n))
    Remove random state  O(m)                   O(log(n))
    Remove min state     O(1)                   O(log(n))
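
The RandomBucketOpenList column of the table can be sketched as follows; a cached minimum bucket would give the listed O(1) minimum removal, while the `min()` call below stays O(m) purely for brevity, and all names are illustrative:

```python
import random
from collections import defaultdict, deque

class RandomBucketOpenList:
    """Bucket-based open list sketch: one FIFO bucket per h value.
    Insert is O(1); removing a random state walks the m buckets, O(m)."""

    def __init__(self, rng=None):
        self.buckets = defaultdict(deque)   # h value -> FIFO bucket of states
        self.size = 0
        self.rng = rng or random.Random(0)

    def insert(self, h, state):             # O(1)
        self.buckets[h].append(state)
        self.size += 1

    def remove_min(self):                   # O(1) with a cached minimum bucket
        h = min(self.buckets)               # O(m) here, for brevity only
        state = self.buckets[h].popleft()
        if not self.buckets[h]:
            del self.buckets[h]
        self.size -= 1
        return state

    def remove_random(self):                # O(m): walk buckets to the index
        i = self.rng.randrange(self.size)
        for h, bucket in self.buckets.items():
            if i < len(bucket):
                state = bucket[i]
                del bucket[i]
                if not bucket:
                    del self.buckets[h]
                self.size -= 1
                return state
            i -= len(bucket)
```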

14. type-based-GBFS
    Results: results scale similarly.
    Implementation: reduced complexity, O(1) instead of O(m) in the number of buckets; a vector containing the buckets and a map pointing to the buckets.

    Coverage sums (scale 1400 to 1800):
    configuration   original   ours
    ff-base         1561       1612
    ff-typed        1755       1785
    cea-base        1498       1530
    cea-typed       1678       1719
    cg-base         1513       1538
    cg-typed        1691       1694

15. type-based-GBFS: multiple heuristics
    ff-cea-g, ff-cg-cea-g and ff-cg-g are additions on our side.
    Longer keys lead to more evaluations, resulting in worse results.
    Even the const(1) key performs better.

    Coverage sums (scale 1400 to 1800, two values per configuration where available):
    ff-base      1561, 1612
    const(1)     1529, 1735
    g            1758, 1725
    ff           1729, 1690
    ff-g         1755, 1787
    ff-cea-g     1691
    ff-cg-cea-g  1661
    ff-cg-g      1723

16. Monte-Carlo random walks
    Results: the original numbers are estimated from percentage results; good MHA results.
    Implementation: support for multiple configurations (helpful actions, dead-end avoidance, iterative deepening, acceptable progress).

    Coverage sums (scale 0 to 250, two values per configuration where available):
    base                          214, 234
    pure                          282, 230
    MDA                           205
    MHA                           237
    pure-no-acceptable-progress   248

17. Local exploration
    Results: the results scale similarly to the original results.
    Implementation: an abstract wrapper; combinations of different search engines are possible.

    Coverage sums (scale 1400 to 1750):
    configuration   original   ours
    ff-base         1561       1612
    ff-local        1657       1700
    cg-base         1513       1540
    cg-local        1602       1600
    cea-base        1498       1528
    cea-local       1603       1607

18. DBFS
    Results: good results; bad results for deferred evaluation.
    Implementation: three open lists:
    - DiverseOpenList
    - ProbabilisticOpenList (global open list)
    - any open list (local open list)
    ProbabilisticOpenList uses a modified algorithm: only iterate over existing values.

    Coverage sums (scale 1100 to 1500):
    configuration     original   ours
    ff-base           1209       1228
    ff-diverse        1451       1440
    cg-base           1170       1207
    cg-diverse        1358       1397
    cea-base          1202       1223
    cea-diverse       1388       1451
    ff-diverse-lazy              1222

19. Comparison
    Comparison of all algorithms on the IPC 2011 benchmarks.
    Standard (eager) search, and deferred (lazy) search where applicable.

20. Eager
    All new algorithms improve results compared to standard GBFS.
    Random walks and EHC cannot compete with the current algorithms.
    Simple randomisation leads to a similar improvement (ε-GBFS, type-based-GBFS).

    Coverage sums (scale 0 to 250):
    base                       192
    DBFS                       224
    GBFS-LS                    200
    ε-GBFS                     214
    type-based-GBFS            213
    EHC                        104
    Monte-Carlo random walks   118

21. Lazy
    Deferred evaluation leads to worse results in most cases.

    Coverage sums (scale 0 to 250):
    base              197
    DBFS              156
    GBFS-LS           193
    ε-GBFS            207
    type-based-GBFS   218

22. Conclusion
    All algorithms perform as well as reported.
    Simple randomisation can massively improve the results.
    For ε-GBFS, the improvements showed their potential.

23. Future work
    Try to combine algorithms.
    Try new configurations: we could try a single-bucket randomisation with the alternating open list.
    Optimise.
    Comparison on a bigger benchmark set.
