Tim O’Mahony (Technical Support) – Global Distributed Perforce (PowerPoint PPT presentation)
SLIDE 1

#

Tim O’Mahony Technical Support

SLIDE 2

#

  • Previously…in Global Distributed Perforce
  • Don’t do that… do this!
  • Living on the Edge
  • Not just for multi-site, but everywhere.
SLIDE 3

#

SLIDE 4

#

  • Provide warm standby servers
  • Reduce load and downtime on a primary server
  • Provide support for build farms
  • Alternative to Proxy in some places
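A forwarding replica serving these roles is normally defined by a server spec plus a few configurables. The sketch below uses standard Perforce replication settings (P4TARGET, startup.N pull threads, lbr.replication, db.replication), but the replica ID fwd-syd and the host names are invented for illustration; they are not from the talk.

```shell
# On the master: illustrative configurables for a forwarding replica
# whose ServerID is "fwd-syd" (ID and addresses are assumptions).
p4 configure set fwd-syd#P4TARGET=master.example.com:1666
p4 configure set "fwd-syd#startup.1=pull -i 1"      # metadata pull thread
p4 configure set "fwd-syd#startup.2=pull -u -i 1"   # archive pull thread
p4 configure set fwd-syd#lbr.replication=readonly   # replicate archive files
p4 configure set fwd-syd#db.replication=readonly    # serve reads locally
# The replica's server spec would carry:  Services: forwarding-replica
```

These commands assume a live master server, so the block is a configuration sketch rather than something to paste verbatim.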
SLIDE 5

#

  • Pros

– Process commands locally
– Holds both metadata and archive files
– Great for a remote site that is mostly browsing and submitting little
– Great for offloading work on local fast-LAN sites

SLIDE 6

#

  • Cons

– Forwards all write commands to the Master Server
– Trade-off vs Proxy: requires a higher level of machine provisioning and more administrative consideration

SLIDE 7

#

Duplication of services and metadata, only to go back to the master for the very operations I want handled locally

SLIDE 8

#

SLIDE 9

#

  • If that replica could handle 98% of things?
  • Regular commands happen on the replica
  • Reduce remote users’ time waiting for version management tasks

SLIDE 10

#

…cause Every Band, I mean Perforce Versioning Service, needs

  • One (or two, maybe three)

SLIDE 11

#

  • Commit Server

– Stores the canonical archives and permanent metadata. Similar to a Perforce master server, but may not contain all workspace information.

  • Edge Server

– An edge server contains a replicated copy of the commit server data and a unique, local copy of some workspace and work-in-progress information. It can process read-only operations and operations that only write to the local data.
SLIDE 12

#

  • Each edge server must be backed up separately from the commit server.
  • Exclusive locks are global.
  • Shelves created on an edge server are not usually shared between edge servers.
  • You can promote a shelf in 14.1
  • Auto-creation of users is not possible
SLIDE 13

#

  • Labels – global or local to edge
  • Triggers
  • Logs and Audits – Edge has its own
  • Unload depot may be different on the edge
  • Time Zone needs to be the same
  • Upgrade Commit and Edge at the same time.
SLIDE 14

#

Benchmark of Perforce operations with 128 ms network latency between client and server. The file-related commands operated against 7,000 files.

SLIDE 15

#

SLIDE 16

#

  • Options

– From scratch
– Utilize existing Forwarding Replicas or Build Farms

  • Turn the Master into the Commit Server

– Choose a ServerID and use p4 serverid to save it
– Server spec Services: commit-server
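Turning the master into the commit server, as described above, might look like the following; the ServerID commit-1666 is a made-up example, not from the talk.

```shell
# On the master that is becoming the commit server.
# "commit-1666" is an illustrative ServerID (an assumption).
p4 serverid commit-1666    # save the ServerID into the server root
p4 server commit-1666      # then edit the spec so that it reads:
#   ServerID: commit-1666
#   Services: commit-server
```

Both commands assume a live master to talk to, so treat this as a configuration sketch.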

SLIDE 17

#

  • Set up a replica
  • Services: edge-server
  • Take a filtered checkpoint

– p4d -r $P4ROOT -K db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex -jd -z filtered.gz

  • Restore & start up the Edge
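The filtered-checkpoint step can be sketched as a small helper that assembles the p4d command line from the list of workspace-local db tables to exclude. The helper only prints the command (a dry run), since actually running it needs a real server root; the /p4/commit/root path is an assumption.

```shell
# Build (but do not run) the filtered-checkpoint command from the step above.
# The excluded tables are the workspace-local ones an edge keeps for itself.
filtered_checkpoint_cmd() {
  root=$1   # server root, e.g. /p4/commit/root (assumption)
  out=$2    # checkpoint file, e.g. filtered.gz
  tables="db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex"
  printf 'p4d -r %s -K %s -jd -z %s\n' "$root" "$tables" "$out"
}

filtered_checkpoint_cmd /p4/commit/root filtered.gz
```

Printing the command first is a cheap way to review the table filter before taking a checkpoint on a production root.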
SLIDE 18

#

  • Migrate Workspaces to the Edge

– Have users Submit/Revert

  • Unload the workspace

– p4 unload -c workspace

  • Reload the workspace on the edge

– p4 reload -c workspace -p protocol:host:port

  • protocol:host:port refers to the commit or remote edge server the workspace is being migrated from.
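Concretely, migrating a single workspace might look like this; the workspace name and server address are invented for illustration. Note that p4 reload is run while connected to the destination edge server, with -p naming the server the workspace is coming from.

```shell
# Illustrative migration of one workspace to the edge (names assumed).
p4 unload -c bruno_ws                            # on the server that owns it
p4 reload -c bruno_ws -p commit.example.com:1666 # run against the edge
```

These commands require live servers on both ends, so this is a sketch of the sequence rather than a runnable script.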

SLIDE 19

#

  • Run “p4 -Ztag info”

$ p4 -Ztag info
...
... serverVersion P4D/DARWIN90X86_64/2014.1/821990 (2014/04/08)
... ServerID myEdge
... serverServices edge-server
... changeServer change.perforce.com.au:1666
... serverLicense Perforce Software Pty Ltd 500 users (expires 2015/01/06)
... serverLicense-ip 127.0.0.1
... caseHandling insensitive
... replica commit.perforce.com.au:1666
... minClient 97.1
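The two fields worth checking in that output are ServerID and serverServices. As a small sketch that needs no live server, the slide's (abridged) output can be treated as canned text and the serverServices value pulled out with awk:

```shell
# Canned "p4 -Ztag info" lines from the output above (abridged).
ztag_info='... ServerID myEdge
... serverServices edge-server
... changeServer change.perforce.com.au:1666
... replica commit.perforce.com.au:1666'

# Extract the serverServices value to confirm this really is an edge server.
services=$(printf '%s\n' "$ztag_info" | awk '$2 == "serverServices" { print $3 }')
echo "$services"   # prints: edge-server
```

Against a real server, the same extraction would pipe `p4 -Ztag info` into the awk command instead of the canned string.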

SLIDE 20

#

  • Triggers

– edge-submit

  • Like a pre-submit trigger

– edge-content

  • mid-submit trigger on the edge server
  • after file transfer from the client to the edge server
  • prior to file transfer to the commit server.
  • At this point, the changelist is shelved.
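A minimal sketch of what an edge-submit trigger might look like. The edge-submit trigger type is from the talk; the trigger name, script path, and the check itself are invented for illustration, and the script body is shown as a shell function so it can run standalone.

```shell
# Hypothetical trigger table entry (via "p4 triggers"), illustration only:
#   edge-note edge-submit //... "/p4/common/edge_note.sh %changelist% %serverid%"

# The script body as a function: refuse to run without a changelist number,
# otherwise log and let the submit proceed (return status 0 = allow).
edge_note() {
  change=$1
  server=$2
  if [ -z "$change" ]; then
    echo "edge_note: missing changelist number" >&2
    return 1
  fi
  echo "edge-submit fired for change $change on ${server:-unknown-edge}"
}

edge_note 1234 myEdge   # prints: edge-submit fired for change 1234 on myEdge
```

A real trigger would typically inspect the changelist (for example with p4 describe) before deciding; the non-zero return is what rejects the submit on the edge before any files move toward the commit server.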
SLIDE 21

#

  • Peeking (improved concurrency through lockless reads)

– p4 configure set db.peeking=2

  • Consider filtering if in remote areas
  • Backup strategies
  • Build servers chained off the Edge Server
SLIDE 22

#

SLIDE 23

#

Local and distributed edge setup

SLIDE 24

#

  • Lots of Edges

– Have the Commit just Commit

  • lbr.replication=share

– Leverage Same Storage Solution

  • Commit and Edge point to same Storage

– Automatic Promotion of Shelves

  • Clustered Perforce
SLIDE 25

# Tim O’Mahony is a Technical Support Manager from the Australian office at Perforce. He has a wide and diverse knowledge of Perforce products, specializing in its server technology since 2004. Before joining Perforce, he focused on network simulation and Java programming.

SLIDE 26

#

Tim O’Mahony tomahony@perforce.com

SLIDE 27

#

SLIDE 28

#

  • Why would I consider this model?

– When servers are in a Data Center
– If the primary Perforce server is very high spec
– With a large number of client workspaces
– For a very high number of transactions

SLIDE 29

#

  • Delegates load from primary server
  • Transparent to end users
  • Provides failover
  • Capacity is easy to increase
  • Improved service levels to end users

– Increase capacity
– Backup
– Failover

SLIDE 30

#

SLIDE 31

#

SLIDE 32

#

  • Packaged command line utility, p4cmgr

– Configuration
– Control
– Administration

  • Technologies

– Linux
– SaltStack
– Python
– Apache ZooKeeper

SLIDE 33

#

  • p4cmgr <command> <options>
  • p4cmgr --help

optional arguments:
  --help  show this help message and exit

subcommands: {init,add,start,stop,restart,status,backup}
  init      Initialise a new cluster and create a depot master
  add       Add a service into a cluster
  start     Start a service or services on a host or cluster
  stop      Stop a service or services on a host or cluster
  restart   Restart a service or services on a host or cluster
  status    Get a simple debug-style output for all nodes
  backup    Perform a backup of the cluster

SLIDE 34

#

  • p4cmgr init <cluster> <node> [-s <service>]

– Configures a new cluster
– Installs salt-minion
– Defines first ZooKeeper
– Deploys depot-master onto the given node
– Establishes baseline for subsequent Perforce servers
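Putting the subcommands together, a first bring-up might look something like the sketch below. The node names and the exact service-type tokens are assumptions (the talk lists the types only as ZooKeeper, depot standby, workspace server, workspace router), so the real spellings should be checked against p4cmgr --help.

```shell
# Illustrative cluster bring-up (node names and type tokens are assumptions).
p4cmgr init mycluster node1    # first ZooKeeper + depot-master on node1
p4cmgr add zookeeper node2     # grow the ZooKeeper ensemble
p4cmgr add workspace-server node3
p4cmgr status                  # check composition before starting
p4cmgr start                   # services come up in order; router last
```

This is a sketch of the intended workflow, not a tested transcript, since p4cmgr is Perforce's packaged utility and is not generally available to run here.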

SLIDE 35

#

  • p4cmgr add <type> <node>
  • Supported types

– ZooKeeper
– Depot standby
– Workspace server
– Workspace router

  • Actions

– Installs salt-minion
– Deploys relevant components onto the node

SLIDE 36

#

  • p4cmgr start/stop

– Brings the cluster up/down in the correct sequence
– Router started last and stopped first

  • p4cmgr restart

– Stop, then start

  • p4cmgr status

– Prints composition, configuration and status
– Verbose output

SLIDE 37

#

  • p4cmgr backup

– Still open for business
– Processing load delegated to the standby

  • Admin checkpoint on the standby
  • Journal rotate on the master
  • Still need off-site OS backups

– Checkpoint
– Journal
– Archives

SLIDE 38

#

SLIDE 39

# Darrell Robins is a Software Developer based in the Perforce UK office. He has been with Perforce since 2011, working mainly on web based projects such as OnDemand, Commons and Insights. Life before Perforce was a mixture of web, java and c programming.

SLIDE 40

#

Darrell Robins drobins@perforce.com