Portal Replication Across Global Servers

I am creating a Portal site in the US. The same site needs to be accessible from Indonesia, and it has been suggested that we replicate the Portal site in Indonesia because of download times, etc. I have replicated a regular website before, but never a Portal. What would this entail? What kind of replication procedures are provided, or would be needed?
Regards,
Kendra

I don't recommend this approach. There are lots of references from one entry to another in the directory and so renaming the suffix is not that simple. In addition, the directory contains host names, domain names, and other information that is not going to be consistent with your Windows host. I would say it's possible, but probably not practical in a reasonable time frame.
The recommended way to do development with the Sun Portal is to create some build scripts that can be used for different environments. This way you can simply create a "build" and run your scripts on various deployment targets. This creates a repeatable process that avoids the types of problems you are running into.
Take a look at the ANT build scripts that are used to create the sample portals. There is a wealth of information there that will help you understand how to use scripts to create and deploy your portal. You will also have a really good understanding of the difference between Access Manager tasks and Portal Server tasks once you digest that information.
- Jim

Similar Messages

  • Capture performance metrics across multiple servers

    Hello. I'm still very new to PowerShell, but does anyone know of a good PowerShell v3-v4 script that can capture performance metrics across multiple servers, with an emphasis on HPC (high-performance computing), and generate a helpful report, perhaps in HTML or Excel format?
    Closest thing I've found and used is this line of powershell:
    http://www.microsoftpro.nl/2013/11/21/powershell-performance-monitor-on-multiple-remote-computers/
    Maybe figure out a way to present that in better format, such as HTML or Excel.
    Also, could someone suggest some performance metrics to look at from an HPC perspective? For example, if a CPU is running at 100% utilization, figure out which cores are running high, see how many threads are queued waiting for CPU time, etc.

    As far as formatting is concerned,
    ConvertTo-HTML is a basic HTML output format, but you can spice it up as much as you like:
    http://technet.microsoft.com/en-us/library/ff730936.aspx
    Out-GridView is very functional and pretty simple:
    http://powertoe.wordpress.com/2011/09/19/out-gridview-now-has-a-passthru-parameter/
    Here's an example with Excel (the "Excel Worksheets Example"):
    This might be a good reference for HPC; I don't have access to an HPC environment, so I can't offer much advice there.
    http://technet.microsoft.com/en-us/library/ff950195.aspx
    It might be better to keep unrelated questions separate, so a thread doesn't focus on one question and you lose time getting an answer to another.
    I hope this post has helped!
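    The ConvertTo-HTML idea above translates to other languages too. Here is a minimal Python sketch of the same pattern (turning metric rows into an HTML table); the sample data, field names, and function name are hypothetical, not part of any monitoring API:

```python
# Minimal sketch: render performance samples as an HTML table,
# similar in spirit to PowerShell's ConvertTo-HTML.
from html import escape

def to_html_table(rows, headers):
    """Build a simple HTML table from a list of dicts."""
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(r[h]))}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

# Hypothetical samples; in practice these would come from your collector.
samples = [
    {"server": "hpc-node-01", "cpu_pct": 97.5, "queued_threads": 12},
    {"server": "hpc-node-02", "cpu_pct": 41.0, "queued_threads": 0},
]
report = to_html_table(samples, ["server", "cpu_pct", "queued_threads"])
```

    The resulting string can be written to a .html file and opened in a browser, which mirrors piping objects through ConvertTo-HTML and Out-File.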

  • IPTV load balancing across broadcast servers.

    I know that across Archive servers in the same cluster the IPTV control server will load balance; is there a similar function with Broadcast servers? I know broadcast servers use a different delivery mechanism (multicast). We have multiple broadcast servers that take in an identical live stream, but the only way to advertise through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?

    No. There is no way to load balance across multiple broadcast servers for live streams. Since this is multicast, there should not be any additional load on the servers as the number of users grows.

  • Security info propagation across weblogic servers

    Hi,
    I have a requirement wherein my business layer needs information about the logged-in user's profile. The web layer authenticates the user against the WebLogic internal LDAP and calls a session bean in the business layer. In the business layer I am able to retrieve the logged-in user ID using sessionContext.getCallerPrincipal(), but I also need more information, such as user_preferences and emp_id. Is there a way of setting these attributes in a context object that gets propagated across WebLogic servers transparently? Otherwise I need to modify all my session bean APIs to accept a userPreferencesDTO as an additional parameter.
    Please advise,
    venkat

    Have you found any solution for this?

  • Managing the Principal and Mirrored DBs across 2 Servers.

    We have mirroring set up across 2 servers with a witness server: A1, A2, and W1 respectively.  We have 16 databases.  Can half of the DBs on A1 be the principal, while A2 is the principal for the other half?   Basically I want to understand whether I can balance the DBs across the 2 servers, with some DBs being principal on A1 and others on A2.

    Hi,
    Yes, you can configure some DBs to be principal on A1 and others on A2, because database mirroring works at the database level.
    Please note that on a 32-bit system, database mirroring can support a maximum of about 10 databases per server instance because of the number of worker threads consumed by each database mirroring session. However, these restrictions do not apply on 64-bit SQL Server systems.
    More information, please see:
    Mirroring Multiple SQL Server Databases on a Single Instance
    http://blogs.technet.com/b/rob/archive/2010/02/11/mirroring-multiple-sql-server-databases-on-a-single-instance.aspx
    Best Regards,
    Tracy
    Tracy Cai
    TechNet Community Support

  • Infrastructure Navigator discovery across vCenter Servers

    Is it possible to configure VIN to discover dependencies across multiple vCenter Servers? It looks like each instance of VIN is associated with just one vCenter Server. Neither linked mode nor SSO multi-site mode seems to integrate discovery across vCenter Servers. If this is not possible in the current version of VIN, is it something in the pipeline for the next release? Maybe some sort of integration of the lookup/inventory services to gain visibility across VCs?

    In linked mode, VIN shows the other VC's VMs as external IPs by design.
    VIN supports linked mode in the sense that it discovers the environment of the particular VC, even though there are two environments (i.e., two VM lists) connected.
    In addition, in VIN's window settings you can use the drop-down box to select which VC to discover.
    If you experience any discovery issues, I suggest you open an SR so our Support team can research the logs and refer it.
    Thanks,
    Nir

  • Replicating content across UCM servers

    Hi All,
    Is it possible to set up a content server to replicate content across different content servers, e.g. CS1 replicating to CS2 and CS3? I need to know the sequence of actions/steps to be performed for this.

    It is certainly not a real-time replication.
    The Archiver, once configured, will create a zip batch containing the data and the content; its creation will take some time depending on the amount of data to be processed. Then the zip is transferred to the other environment (this can be another bottleneck if the file is huge) and uploaded, which takes additional time. Also, creation and upload will consume system resources (and may decrease system performance for human users).
    The Archiver is pretty self-explanatory, but you may also read the manual - http://docs.oracle.com/cd/E29542_01/doc.1111/e26692/part_migration.htm#CHDFEIDA
    If you have higher demands (e.g. real-time replication), please, elaborate on your use cases - you will certainly need other means, and their optimal version might depend on details.

  • Is it possible to share an ODBC.INI across Essbase servers?

    h5. Summary
    Our planned production environment may have as many as 15 Essbase servers so it would be useful if the odbc.ini file could be maintained centrally. Is this possible?
    h5. Problem
    The servers are implemented as a high-availability cluster. The infrastructure team tell me that a side effect of this is that the Essbase install builds the server node name into the path for EPM_ORACLE_HOME. Because this path has to be hard-coded in the odbc.ini Driver specification (see below), this apparently forces you to maintain a separate odbc.ini for each Essbase server.
    Driver=/htesb1/u03/Oracle/Middleware/EPMSystem11R1/common/ODBC-64/Merant/6.0/lib/ARora24.so
    h5. Requirement
    As part of the landscape we will have an OCFS shared file system mounted across all the Essbase servers. What I'd like to be able to do is place a single odbc.ini file on this file system and for that file to be used by all the Essbase nodes. I've tried using an environment variable in the Driver path specification. I've also tried using a symbolic link.
    Does anybody know of a way to avoid the hard-coded paths in the odbc.ini?
    h5. Alternative
    Another possibility is to use OCI instead, though I saw in another thread that this doesn't work with parallel loads. Assuming that problem can be worked around, it means the connection information will be consistent regardless of the Essbase server, and the odbc.ini maintenance problem goes away. The catch is that the connection information is then embedded in the rules files, meaning we have to update them whenever they are moved between environments (e.g. dev to test) and whenever an environment's connection details change. I tried using substitution variables, but these only apply to ODBC connection names. Does anyone know if there is some way to parameterise the OCI connection information?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit
    Edited by: blackadr on 20-Jun-2012 16:45

    I had the same question some time ago. Perhaps you can modify the suggestion given to me to meet your needs? Here's the thread:
    http://discussions.apple.com/thread.jspa?threadID=342589&tstart=0
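    One workaround along these lines is to keep a single odbc.ini template on the shared OCFS volume and render a node-specific copy at deploy time, substituting the node-dependent part of EPM_ORACLE_HOME. A minimal Python sketch, with hypothetical paths and variable names (this is not an Essbase-provided mechanism):

```python
# Sketch: render a per-node odbc.ini Driver line from one shared template.
from string import Template

# Shared template; $epm_home is our own placeholder, not an ODBC feature.
template = Template(
    "Driver=$epm_home/common/ODBC-64/Merant/6.0/lib/ARora24.so\n"
)

def render_odbc(node_name):
    # Each node's install path embeds its own host name (hypothetical layout).
    epm_home = f"/{node_name}/u03/Oracle/Middleware/EPMSystem11R1"
    return template.substitute(epm_home=epm_home)

line = render_odbc("htesb1")
```

    A small deploy script could run this on each node and write the result to that node's local odbc.ini, keeping only the template under version control.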

  • Splitting up a job for 140k+ users across multiple servers

    Hello, 
    I am pretty new to Powershell and want to learn more about scaling stuff and just started working with jobs.
    In this particular case I am just doing a mass enable or disable at the per-user level.  The other script I need to do this with grabs and checks values on around 6000 distribution groups and, using the current values and type, creates new commands to add/remove certain users or permissions in bulk with Invoke-Expression.  I *think* it would probably be best in my case to run these across servers as well.
    Basically what I am looking at is:
    using one large list/array, counting it, splitting it, and using the available resources with jobs.
    One of the problems I have had with this, but seem to have mostly figured out, is how to combine or 'foreach' several different values that may need to be applied to separate objects on certain servers with certain users and certain attributes. 
    Last night I ran the first script that could do that; it took me a while and I'm sure it looks like a wreck, but it worked!
    Now to tackle size.
    Thank You

    Hi Paul,
    looking good so far. Did a little rewrite of what you posted:
    Function Disable-Stuff {
        Param (
            [Parameter(Position = 0, Mandatory = $true)]
            [string]
            $file,
            [Parameter(Position = 1)]
            [ValidateSet('CAS', 'MBX', 'ALL')]
            [string]
            $servertype = "CAS"
        )
        # Collect server lists
        $servers = @()
        switch ($servertype)
        {
            "CAS" { $servers += Get-ClientAccessServer | Select -ExpandProperty name }
            "MBX" { $servers += Get-MailboxServer | Select -ExpandProperty name }
            "ALL"
            {
                $servers += Get-ClientAccessServer | Select -ExpandProperty name
                $servers += Get-MailboxServer | Select -ExpandProperty name
                # Remove duplicate names (just in case)
                $servers = $servers | Select -Unique
            }
            default { }
        }
        # Calculate the set of operations per server
        $boxes = ($servers).count
        $content = Get-Content $file
        $split = [Math]::Round(($content.count / $boxes)) + 1
        # Create index counter
        $int = 0
        # Split up the task
        Get-Content $file -ReadCount $split | ForEach {
            # Store this chunk of file content in a variable
            $List = $_
            # Select the server that does the doing
            $Server = $servers[$int]
            # Increment the index so the next set of objects uses the next server
            $int++
            # Do something amazing
            # ... <-- Content goes here
        }
    }
    Disable-Stuff "c:\job\disable.txt" "CAS"
    Notable changes:
    Removed the test variables out of the function and added them as parameters
    Modified the Parameters a bit:
    - $file now is mandatory (the function simply will not run without it)
    - The first parameter will be interpreted as the file path
    - The second parameter will be interpreted as Servertype
    - $Servertype can only be CAS, MBX or ALL. No other values accepted
    - $Servertype will be set to CAS unless another servertype is specified
    Your if/elseif/else construct has been replaced with a switch (I vastly prefer them, but they are functionally the same).
    I removed the unnecessary temporary storage variables.
    Appended a placeholder scriptblock at the end that shows you how to iterate over each set of items and select a new server each time.
    I hope this helps you in your quest to conquer Powershell :)
    Cheers,
    Fred
    There's no place like 127.0.0.1
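    The splitting arithmetic in the script above is language-independent. Here is a minimal Python sketch (hypothetical server and user names) of dividing one large list into roughly equal per-server chunks, mimicking what Get-Content -ReadCount achieves:

```python
# Sketch: split one big work list into one chunk per server.
import math

def split_for_servers(items, servers):
    """Yield (server, chunk) pairs with roughly equal chunk sizes."""
    chunk_size = math.ceil(len(items) / len(servers))
    for i, server in enumerate(servers):
        chunk = items[i * chunk_size:(i + 1) * chunk_size]
        if chunk:  # skip servers that get no work
            yield server, chunk

users = [f"user{i}" for i in range(10)]
plan = list(split_for_servers(users, ["CAS1", "CAS2", "CAS3"]))
```

    With 10 users and 3 servers this produces chunks of 4, 4, and 2, the same rounding-up behavior as the $split calculation in the PowerShell version.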

  • Portal Replication

    Hi ,
    I have a portal deployed in a solaris server.If I have to get the exact copy in another portal server deployed in another machine , can I adopt the following steps -
    The directory tree in both the servers are different.
    Shall I export the portal into a par file and import it new instance? Is it that simple or are there any steps that I need to follow?
    Regards,
    Vivek

    I don't recommend this approach. There are lots of references from one entry to another in the directory and so renaming the suffix is not that simple. In addition, the directory contains host names, domain names, and other information that is not going to be consistent with your Windows host. I would say it's possible, but probably not practical in a reasonable time frame.
    The recommended way to do development with the Sun Portal is to create some build scripts that can be used for different environments. This way you can simply create a "build" and run your scripts on various deployment targets. This creates a repeatable process that avoids the types of problems you are running into.
    Take a look at the ANT build scripts that are used to create the sample portals. There is a wealth of information there that will help you understand how to use scripts to create and deploy your portal. You will also have a really good understanding of the difference between Access Manager tasks and Portal Server tasks once you digest that information.
    - Jim

  • WDA application differs across application servers

    Hi,
    When transporting from dev to Q, with Q running on 4 different app servers, we see different behaviour in our WDA application. This manifests itself as, for instance, table columns not being in the same positions, and different column widths within the same tables.
    It may be down to the different app servers running different versions of the WDA application, but then my question would be: how do we ensure all app servers are "synchronized" whenever we release a new transport? As of now, the users report conflicting test results based on the app behaving differently depending on which app server they happen to be routed to...
    OSS doesn't seem to hold any clues to this kind of behaviour. Would a simple flush of the browser cache be sufficient, or is this to do with the app servers themselves (cache, memory, whatever...)?
    Regards,
    Trond

    I don't believe the app servers are holding different versions of a program; it should be the same across the systems.
    Please check the following:
    A column's width changes based on its content if no width is set for the column.
    Also:
    Users can always personalize the column positions, if personalization is enabled.
    Thanks
    Abhi

  • Management Store Replication to Edge Servers

    I have an issue with replication of the Management Store to Edge Servers.
    In four separate countries I have an Enterprise FE pool and an Edge pool segregated by a firewall in each country. Only in one country does the Management Store replicate to the Edge Server successfully (see below).
    The bottom two entries with a status of "True" are for the FE and Edge that replicate OK.
    From all the FE servers I can telnet to their partner Edge server on port 4443, and I can browse to the replication service at https://servername.fqdn:4443/replicationwebservice. The certificates look fine too.
    If I run the Lync Server logging tool on the failing Edge servers and force replication from a FE server with Invoke-CsManagementStoreReplication, there is nothing showing up in the XDS_Replica_Replicator log at all. If I do the same on the good Edge server, I get a whole bunch of entries in the log.
    I thought maybe it was a firewall issue, but I have subsequently opened up the source IP range in my firewall rules to allow everything to speak to the Edge servers' internal interface on 4443. Still nothing.
    From the timestamps in the above screenshot you can see that the Edge servers have at least once reported back to the FE servers, as the LastStatusReport value is not null, but you can also see that that was a long time ago.
    Any ideas?

    By any chance do you see Schannel errors in the Event Viewer of the Edge server? I've seen the exact same thing happening when the Edge internal certificate is not trusted by the Front End server.
    http://thamaraw.com
    I get a couple of Schannel errors regarding TLS 1.2, but I get the exact same errors on the Edge server that replicates OK, so I don't think that's the issue. Also, if the FE didn't trust the cert of the Edge, surely I wouldn't be able to browse to the replication web service on the Edge, which I can?

  • Managing Users Across Numerous Servers

    Hey all, got a question.
    I have 6 servers that my company uses to host all of the media we use on a daily basis. We're tightening security around here, so I'm setting them up to allow each of our 80 users to admin their own passwords. Seems simple, yeah?
    Well, I first tried to set up an Open Directory server to do the user and password managing. It is/was a 10.5.2 Server. All of the servers pointing to it are 10.4.11. Worked okay when I first implemented it, but then crashed and burned horribly. Got some really cryptic errors on the 10.5 side, and it basically stopped working. Because we can't afford ANY downtime at my company, I scrapped it, and am doing it on a server level.
    I have the first server up and working. Everyone has a login, and everyone can admin their own passwords. So, this leaves me with five more servers to set up.
    I don't really want everyone to have to go through the same steps to change their passwords on every one of those.
    I know that you can export user lists from WGM, but it doesn't retain passwords.
    Any thoughts on how I should proceed?
    Thanks in advance!

    The "Password Server" you are looking for is there in the Open Directory Master.
    Set up the user accounts, including user-changeable passwords on the Open Directory Master, and Mac OS X Server software will do all the replication and updating to all the other Open Directory Replicas. You only need enter the User Info once, on the Open Directory Master, and it will be quickly and automatically replicated to all the Open Directory Replicas. The users can change their passwords and any other attributes at will, and all User info on all Servers are updated quickly. (With six servers, you can set it to be essentially instantly.)
    For this to work as expected, the User Accounts should be created with Workgroup Manager in a Network Accessible Shared Directory that has a Network Mount record. Then the Users are Network Users, rather than Local Users, and can log in from any Server or Workstation and use the resources their Username and Group allows them.
    Six servers could easily support login (but probably not the file volumes you are serving) for over a thousand Users. There is no way a person could keep that much info updated if the account info had to be manually replicated. For your network, you would be using features developed for Server networks with thousands of Users, to service your handful of users.
    This Open Directory Master/Replica setting only refers to Open Directory info. There is no need for any thing to be manually replicated on each Server. They can and should have their own unique information in other areas.
    Message was edited by: Grant Bennet-Alder

  • MSCS H/A enqueue, replication and message servers services

    Hi,
      I'm looking at the documentation on "Installation of multiple SAP Systems in MSCS: MSSQL Server", but I have a few questions about the SCS and ASCS processes.
    Are the processes set up as generic cluster services, or are they SAP-provided cluster services?
    What monitoring is done on the enqueue, replication, and message server services in the cluster?

    Mike,
    The SAPINST installer will install the SCS and ASCS services into MSCS for you.  They are installed inside the SAP group in the cluster.
    The message server, controlled by the SCS, is made HA by MSCS.  The enqueue service is made HA by installing ERS locally on both nodes.  SCS + ERS + ERS provides a triangular HA setup, which means the service is always on and can withstand a cluster failover.  The cluster only controls the SCS service; the ERS services are local to each server.
    Let me know if I did not answer the question.
    jwise

  • ACE module not load balancing across two servers

    We are seeing an issue in a context on one of our load balancers where an application doesn't appear to be load balancing correctly across the two real servers.  At various times the application team is seeing active connections on only one real server.  They see no connection attempts on the other server.  The ACE sees both servers as up and active within the serverfarm.  However, a show serverfarm confirms that the load balancer sees current connections only going to one of the servers.  The issue is fixed by restarting the application on the server that is not receiving any connections.  However, it reappears again.  And which server experiences the issue moves back and forth between the two real servers, so it is not limited to just one of the servers.
    The application vendor wants to know why the load balancer is periodically not sending traffic to one of the servers.  I'm kind of curious myself.  Does anyone have some tips on where we can look next to isolate the cause?
    We're running A2(3.3).  The ACE module was upgraded to that version of code on a Friday, and this issue started the following Monday.  The ACE has 28 contexts configured, and this one context is the only one reporting any issues since the upgrade.
    Here are the show serverfarm statistics as of today:
    ACE# show serverfarm farma-8000
    serverfarm     : farma-8000, type: HOST
    total rservers : 2
                                                    ----------connections-----------
           real                  weight state        current    total      failures
       ---+---------------------+------+------------+----------+----------+---------
       rserver: server#1
           x.x.x.20:8000      8      OPERATIONAL  0          186617     3839
       rserver: server#2
           x.x.x.21:8000      8      OPERATIONAL  67         83513      1754

    Are you using the sticky feature? What kind of predictor are you using?
    If the sticky feature is enabled and one rserver goes down, traffic will lean to one side.
    Even after that rserver returns to service, traffic may continue to lean due to stickiness.
    The behavior depends on the configuration.
    So, could you share the relevant part of the configuration?
    Regards,
    Yuji
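    Yuji's point about stickiness can be illustrated with a toy simulation. This Python sketch is purely hypothetical (it is not ACE's actual predictor logic): it shows how clients stuck to the surviving rserver stay there even after the other rserver recovers:

```python
# Toy model: sticky entries override the round-robin predictor.
def pick_server(client, sticky, up_servers, rr_state):
    # Honor an existing sticky entry if that server is still up.
    if client in sticky and sticky[client] in up_servers:
        return sticky[client]
    # Otherwise round-robin across the servers currently up.
    server = up_servers[rr_state[0] % len(up_servers)]
    rr_state[0] += 1
    sticky[client] = server
    return server

sticky, rr = {}, [0]
clients = [f"c{i}" for i in range(6)]

# While rserver 2 is down, every client sticks to rserver 1.
for c in clients:
    pick_server(c, sticky, ["rs1"], rr)

# rserver 2 returns, but the existing clients remain stuck to rs1.
after = [pick_server(c, sticky, ["rs1", "rs2"], rr) for c in clients]
```

    Only new clients (or clients whose sticky entries expire) would reach rs2, which matches the lopsided `current` connection counts in the show serverfarm output above.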
