Frequent c2s failures

Hi,
BM 3.9 SP2 IR1
NW 6.5 SP8 (with post-SP8 patches through April 17, 2010)
eDir 8.8 SP4
We are frequently having problems with clients connecting to our system.
Most are using the bm3xvpn12 client with NMAS.
Users often have to try multiple times to get connected; sometimes they restart the computer, sometimes it just doesn't want to work, and sometimes it works just fine.
There is no rhyme or reason that I can find: the same user and system will work and then not work, though those with problems tend to have them repeatedly.
When some users are having problems, others are connected just fine, so it isn't simply that the server is refusing to accept connections.
The last item shown on the IKE screen before it times out is:
IKE : Nmas user check authentication and traffic rule.
The client sits at the negotiating and authenticating message for several minutes, then fails with:
"May be Invalid VPN Server or IKE not loaded."
There are no problems at all if I unload the filters; I consistently get a VPN connection in under 15 seconds.
Is this a possible filter issue?
I have run brdcfg to verify that the filters are all applied, and I only do filter work in Filtcfg, never in iManager.
I've run Wireshark on the client side and have both good and bad captures. Everything seems to match up until the port 500/4500 exchanges.
good:
2 cycles of port 500 communications, then a switch to 4500; the Wireshark Info column says "Identity Protection (main mode)"
3 sends on 4500, then 3 receives on 4500; Info column says "Identity Protection (main mode)"
then a send/receive/send on 4500; Info column says "Quick Mode"
then a switch to UDP 353 (ndsauth)
and the VPN is connected.
bad:
2 cycles of port 500 communications, then a switch to 4500; the Wireshark Info column says "Identity Protection (main mode)"
3 sends on 4500; Info column says "Identity Protection (main mode)"
no reply from the VPN server
3 sends on 500; Info column says "Identity Protection (main mode)"
no reply from the VPN server
1 send on 4500, protocol UDPENCAP; Wireshark Info column says "NAT-keepalive"
1 send on 500; Info column says "Identity Protection (main mode)"
4 replies on 4500; Info column says "Identity Protection (main mode)"
1 reply on 500; Info column says "Identity Protection (main mode)"
then the client sends on 500 to 4500; Info column says "Informational"
then back and forth: port 500 sends show "Informational", replies show "Identity Protection".
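To compare many captures quickly, a script can reduce each pcap to the same send/receive summary as above. This is a minimal sketch using scapy; the pcap filename and client address are placeholders, and it strips the 4-byte non-ESP marker that NAT-T prepends to ISAKMP on port 4500:

from scapy.all import rdpcap, IP, UDP
from scapy.layers.isakmp import ISAKMP

EXCH = {2: "Identity Protection (main mode)", 5: "Informational", 32: "Quick Mode"}

def summarize(path, client_ip):
    # Reduce a capture to one line per IKE packet: direction, port, exchange type.
    for pkt in rdpcap(path):
        if IP not in pkt or UDP not in pkt:
            continue
        udp = pkt[UDP]
        if udp.dport not in (500, 4500) and udp.sport not in (500, 4500):
            continue
        payload = bytes(udp.payload)
        if 4500 in (udp.sport, udp.dport):
            # NAT-T frames carry a 4-byte zero non-ESP marker before ISAKMP;
            # anything else (keepalives, UDP-encapsulated ESP) is skipped here.
            if payload[:4] != b"\x00\x00\x00\x00":
                continue
            payload = payload[4:]
        exch = ISAKMP(payload).exch_type
        direction = "send" if pkt[IP].src == client_ip else "recv"
        print(f"{direction}  port {udp.dport:<5} {EXCH.get(exch, exch)}")

summarize("good.pcap", "192.168.1.10")  # hypothetical file name and client IP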
We also have similar problems connecting to another BM server in the same tree (different location):
BM 3.8 SP5
NW 6.5 SP5
eDir 8.7.3.9

OK, spoke too soon; it just happened to work a couple of times.
The server has read-only replicas of everything, plus it is master of its own partition.
I turned on NMAS in DSTrace; are there any other options I should enable when tracing? I don't see any SLP or NCP addresses.
In a failure, the trace is identical to a success, except that the client apparently doesn't recognize the success message.
There is, though, an error in the traces (both success and failure):
11:16:44 942A3060 NMAS: Accessing local replica of CN=PW Policy.CN=Password Policies.CN=Security
11:16:44 942A3060 NMAS: ERROR: -631 Failed set password for CN=user.OU=ou.O=o
11:16:44 942A3060 NMAS: 34: ERROR: -631 Server Module 0x00000007 Set Password
11:16:44 942A3060 NMAS: 34: ERROR: -631 MAF_SetPassword
11:16:44 942A3060 NMAS: 34: Server Module 0x00000007 Write
11:16:44 942A3060 NMAS: 34: Server Module 0x00000007 Read
11:16:46 942A3060 NMAS: 34: Server Module 0x00000007 Successful
11:16:46 942A3060 NMAS: 34: NDS Login Method Successful
11:16:46 942A3060 NMAS: 34: WhatNext
11:16:46 942A3060 NMAS: 34: Successful login
11:16:48 94201140 NMAS: 34: NMAS session succeeded
11:16:48 94201140 NMAS: 34: Client Session Destroy Request
11:16:48 94201140 NMAS: 34: Local Session Cleared (Not Destroyed)
11:16:48 942A3060 NMAS: 34: ERROR: -1645 Server timed out waiting for data
11:16:48 942A3060 NMAS: 34: Server thread exited
11:16:48 942A3060 NMAS: 34: Pool thread 0x8d43a600 work complete
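Since the good and bad traces differ only near the end, a small script that pulls just the NMAS ERROR and outcome lines out of a saved DSTrace log makes the comparison easier. A minimal sketch; the log filename is an assumption:

import re

# Match the NMAS trace lines worth comparing: errors (with their codes)
# and the session-outcome messages.
PATTERN = re.compile(r"NMAS: (?:\d+: )?(ERROR: (-\d+).*|Successful login|NMAS session succeeded)")

def scan(path):
    with open(path, encoding="latin-1") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                tag = "ERR" if match.group(2) else "OK "
                print(tag, line.rstrip())

scan("dstrace.log")  # hypothetical file name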

Similar Messages

  • Frequent Hardware failure in HP-G Series Notebook

    Hi All
    We are facing frequent hardware failures in an HP G-series notebook.
    I took help from HP tech support as well, but they could not solve the problem, and I need to send the notebook to the repair center.
    I just want to know: is there a problem with the whole G-series model, or is it something else?

    I am also having hardware problems with my HP G-series notebooks. I have two of them and they both have the same problem: a long tone on startup, and some of the keys on the keyboard have stopped working. Any suggestions?

  • Frequent TM Failures

    I posted a thread a while back when I was getting sporadic failures. At that time, the general consensus was that 10.5.3 introduced some errors in TM. I'm at 10.5.6 now and have been since the update, but over the past week I've been getting more frequent TM failures, and they seem to be pretty consistent. Below is the info from the log file. The strange thing to me is the reference to "500 Gigger", because that is my SuperDuper backup and has nothing to do with my TM backup; my TM backup drive is labeled "1TBTM". Any suggestions?
    3/21/09 5:51:09 PM mds[21] (/Volumes/500 Gigger/.Spotlight-V100/Store-V1/Stores/C3E5E22C-5296-4418-B36C-F364D997C440)(Error) IndexCI in ContentIndexOpenBulk:Could not open /Volumes/500 Gigger/.Spotlight-V100/Store-V1/Stores/C3E5E22C-5296-4418-B36C-F364D997C440/live.0.; needs recovery
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Backing up to: /Volumes/1TBTM/Backups.backupdb
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Error (-35): Getting volume path
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Couldn't initialize backup volume from path: /Volumes/1TBTM
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Error (-35): Couldn't get library parent ref /Volumes/1TBTM/Backups.backupdb. Does it exist?
    3/21/09 5:51:10 PM mds[21] (/Volumes/500 Gigger/.Spotlight-V100/Store-V1/Stores/C3E5E22C-5296-4418-B36C-F364D997C440)(Error) IndexStore in SIStoreDirtySDBChunks:Error storing dirty sdb pages: 30
    3/21/09 5:51:15 PM /System/Library/CoreServices/backupd[165] Backup failed with error: Couldn't get info on the target volume.

    PJGNC wrote:
    3/21/09 5:51:09 PM mds[21] (/Volumes/500 Gigger/.Spotlight-V100/Store-V1/Stores/C3E5E22C-5296-4418-B36C-F364D997C440)(Error) IndexCI in ContentIndexOpenBulk:Could not open /Volumes/500 Gigger/.Spotlight-V100/Store-V1/Stores/C3E5E22C-5296-4418-B36C-F364D997C440/live.0.; needs recovery
    This isn't a TM message. It's from process mds (MetaDataSearch) which is associated with Spotlight, among other things. But it does look like that disk could use a +Repair Disk+ via Disk Utility (in Applications/Utilities)
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Error (-35): Getting volume path
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Couldn't initialize backup volume from path: /Volumes/1TBTM
    3/21/09 5:51:10 PM /System/Library/CoreServices/backupd[165] Error (-35): Couldn't get library parent ref /Volumes/1TBTM/Backups.backupdb. Does it exist?
    3/21/09 5:51:15 PM /System/Library/CoreServices/backupd[165] Backup failed with error: Couldn't get info on the target volume.
    These are from TM (the backupd process). I've not seen these messages. I'd also do a +Repair Disk+ on it.
    And there's an easier way to see what TM backups are doing. Download the +Time Machine Buddy+ widget. It shows the messages from your logs for one TM backup run at a time, in a small window, without all the other gibberish.

  • Frequent heartbeat failure alerts on the server

    Hi Experts,
    We are getting the heartbeat failure alert for the xxxxxxx server. We have reinstalled the SCOM agent on the server, but the alert is still generated frequently.
    The server is hosted in the cloud, and we have verified the server's resource utilization (CPU, memory & network); utilization is normal, and we are not finding any packet drops or connectivity issues between the server and the SCOM gateway server. Please advise on this issue.
    Thanks in advance,
    25aish

    If the Windows agent is currently being monitored, and you have verified that by checking whether performance data is available (for example), then the best thing you can do is extend the heartbeat for that particular agent to something that is acceptable.
    In this case, if you are using the default heartbeat settings (which amount to 3 minutes), then just override the agent setting in Administration to allow up to something like 9 minutes. I actually suggest this for all environments right out of the box, because 3 minutes is just way too aggressive. Check every 180 seconds rather than the default 60 seconds...
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Frequent load failures

    What are the frequent InfoPackage failures, and what are the reasons for those failures?
    Please help me out; I need suggestions from everyone.

    Madhusudhan
    There are many reasons for InfoPackage failures, depending on the individual issue. Some of them are:
    The ALEREMOTE user locked during master data loads
    The DataSource has to be replicated
    Activation failures
    Errors in data selection, etc.
    Thanks
    Sat

  • Most frequent heartbeat failure report

    hello,
    I want to create a report, or a SQL query, that outputs the top 10 servers with heartbeat failures in the past xx days.
    Is there any native report in SCOM that does this, or a SQL query that shows it?
    thanks.

    Thank you, Jonathan. Point noted. Can you please help me understand the difference it would make, so I can modify the other queries I use?
    Regards,
    Saravanan
    That's a great question.
    The first reason is that views may implement HINT options that the software developer deems necessary to preserve the integrity of the database and safeguard against lock conditions, as opposed to an ad-hoc table query with no HINT options. This is the main reason I always use views whenever possible: it simplifies the query, because I don't need to remember to include these options in my SELECT statement.
    Another reason is that the calls made from the application use views, so I figure I should too.
    Views also sometimes simplify more complex statements joining multiple tables. This isn't necessarily the case for ManagedEntity vs vManagedEntity, but it's still a practice I apply even if the view is a "mirror" of the table.
    For example, the vManagedEntity view includes the NOLOCK HINT option. If you look up NOLOCK HINT practices, and when to use NOLOCK, it can get a little blurry in terms of impact on database performance. I'd just as soon use the views the vendor created, because they understand where HINTs should be used better than I do; otherwise, I might be causing problems I'm not even aware of, impacting the application's internal processes.
    Borrowing from a thread on StackOverflow:
    "A view is an abstraction layer, and it does what any good abstraction layer does, including encapsulating the database schema and protecting you from the consequences of changing internal implementation details. It's an interface."
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)
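    For the original top-10 question, a query along these lines against the reporting warehouse should work, and it follows the advice above by going through the views. This is a hedged sketch run from Python: the server and database names are placeholders, and the view and alert-name strings are the usual OperationsManagerDW ones but should be verified against your installation.

    import pyodbc

    # Top 10 servers by heartbeat-failure alerts over the last N days.
    SQL = """
    SELECT TOP 10 me.DisplayName, COUNT(*) AS Failures
    FROM Alert.vAlert a
    JOIN vManagedEntity me
      ON me.ManagedEntityRowId = a.ManagedEntityRowId
    WHERE a.AlertName = 'Health Service Heartbeat Failure'
      AND a.RaisedDateTime >= DATEADD(DAY, -?, GETUTCDATE())
    GROUP BY me.DisplayName
    ORDER BY Failures DESC
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=scomdw;DATABASE=OperationsManagerDW;Trusted_Connection=yes"
    )
    for name, failures in conn.execute(SQL, 30):  # 30 = look-back window in days
        print(f"{failures:5}  {name}")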

  • Iphone 5c frequent call failures

    Hi ,
    I have an iPhone 5c (model ME493LL/A, unlocked; model A1532, running iOS 8.1.3; purchased in the US and being used in India).
    In some areas in my office/home, I cannot make any calls. I see 3-4 bars, but I cannot send SMS or make calls, while other phones around me (same network) have no such problems. Unfortunately there are not many other iPhone 5s around me; an iPhone 3GS seems to have far fewer problems. Of late this is turning into a big embarrassment and headache.
    Sometimes turning cellular data on and off helps; sometimes going into airplane mode and coming back helps, but not always.
    I'd appreciate any help.
    My carrier does not support the LTE band used by this phone, so I disabled it.
    Thanks

    Hello. I suggest you back up your iPhone and restore it to try. The method is described here:
    Use iTunes to restore your iOS device to factory settings - Apple Support
    PS: Do remember to back up first.
    Regards,
    Anson

  • Frequent intermittent failure to connect

    My browser(s) regularly fail to connect to any site (email is the same) on the first attempt. Sometimes the refresh command is needed several times before it connects, and having connected, it may then fail again at any page change. The Home Hub has all its lights on at all times, and it is connected by an Ethernet cable. Any ideas to fix this?

    Welcome to the forum.
    Have you tried another browser to see if the problem persists?
    Is your current browser up to date, and do you have the latest driver for your network card?
    Can you run btspeedtester and post the results to show your profile and throughput speed?

  • Airport Extreme confused by frequent login failures

    My AEBS seems to get hung up when a device tries to connect to get an IP address and fails the WPA handshake. Here's what the log says:
    Jan 01 11:11:52 Severity:5 Associated with station 00:12:f0:9e:71:a3
    Jan 01 11:11:53 Severity:1 WPA handshake failed with STA 00:12:f0:9e:71:a3 likely due to bad password from client
    Jan 01 11:11:53 Severity:5 Deauthenticating with station 00:12:f0:9e:71:a3 (reserved 2).
    Jan 01 11:11:53 Severity:5 Disassociated with station 00:12:f0:9e:71:a3
    Jan 01 11:11:53 Severity:5 Associated with station 00:12:f0:9e:71:a3
    Jan 01 11:11:54 Severity:1 WPA handshake failed with STA 00:12:f0:9e:71:a3 likely due to bad password from client
    Jan 01 11:11:54 Severity:5 Deauthenticating with station 00:12:f0:9e:71:a3 (reserved 2).
    This will go on indefinitely, and while it is happening the network is unreliable. Restarting the base station fixes the problem.
    I can't figure out which device this is (it may be a neighbor's!). How can I stop this barrage from freezing up my AEBS?
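    One way to at least identify the offending station is to save the base station's log (e.g. via syslog) and tally the handshake failures per MAC address. A minimal sketch; the log file path is an assumption:

    import re
    from collections import Counter

    failures = Counter()
    with open("airport.log") as log:  # hypothetical saved syslog file
        for line in log:
            m = re.search(r"WPA handshake failed with STA ([0-9a-f:]{17})", line)
            if m:
                failures[m.group(1)] += 1

    # Most frequent offenders first; the top MAC is the device to hunt down.
    for mac, count in failures.most_common():
        print(f"{count:5}  {mac}")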

    I only have one base station, and I have been unsuccessful in finding the culprit. I ended up changing the name of my network, which has solved things for now. The previous name was "Airport", which a nearby mobile device may recognize as a network it can log on to; the new name is a bit more creative.

  • Connection failures between iMac and Macbook Pro

    I'm experiencing frequent connection failures over my wi-fi network between my iMac and Macbook Pro on an otherwise strong, stable network. I can connect both of my computers to the Airport Time Capsule that I use as my wireless router without a problem, but for some reason I get connection failures between the computers say, 50% of the time. Sometimes it works fine, others it just hangs before giving me an error message. Any suggestions?
    OS 10.9.3 on both machines
    Apple Airport Extreme Time Capsule
    iMac, 3.06GHz i3 (2010)
    Macbook Pro Retina, 2.4 GHz i7 (1st Gen)

    Hi jt in nyc,
    Some checks:
    Use Airport Utility on one of the machines
    - Click on the Time Capsule ( enter password if prompted )
    - Click on the Edit button
    Under Airport Utility -> Wireless -> Wireless Options
    Are the Time Capsule Wireless options set correctly?
    - Country -> you know this, change if it is not
    - 2.4GHz and 5GHz Channel set to Automatic
    - Click on Cancel button to return to the Airport Utility graphic diagram.
    - Click on the Time Capsule
    - Click on your machines listed beside wireless clients
    Are the machines reporting an excellent connection quality at different distances from the Time Capsule?

  • Backup Failures to URL

    We are getting frequent backup failures when backing up to URL; below is the message from the event log:
    An error occurred during data transfer operations with SqlServer, HRESULT:  0x80770003
    SQLVDI: Loc=CVDS::Close. Desc=Open devices!. ErrorCode=(0). Process=7396. Thread=7764. Client. Instance=MSSQLSERVER. VD=Global\https://storage/dbname.bak_SQLVDIMemoryName_0.
    BACKUP failed to complete the command BACKUP DATABASE dbname. Check the backup application log for detailed messages.
    Below is the detailed message after turning on trace flag 3051:
    1/27/2015 9:10:28 AM:    ======== BackupToUrl Initiated =========
    1/27/2015 9:10:28 AM:    Inputs: Backup = True, PageBlob= True, URI = https://storage/dbname.bak, Acct= xxxxxx, Key= xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
    FORMAT= True, Instance Name = MSSQLSERVER, DBName = dbname LogPath = G:\MSSQL\logs
    1/27/2015 9:10:28 AM:    Process Id: 7396
    1/27/2015 9:10:28 AM:    Time for Initialization = 31.2446 ms
    1/27/2015 9:10:28 AM:    BackupToUrl Client is getting configuration from SqlServr
    1/27/2015 9:10:28 AM:    Time for Handshake and VDI config = 15.6255 ms
    1/27/2015 9:10:28 AM:    Time for Get BlobRef = 32.0122 ms
    1/27/2015 9:10:29 AM:    lease id is 258757f9-8f8b-4110-bf56-7ab1a5caf67f
    1/27/2015 9:10:29 AM:    Time for blob create = 343.0298 ms
    1/27/2015 9:10:49 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:11:09 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:11:29 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:11:49 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:12:09 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:12:29 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:12:49 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:13:09 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:13:29 AM:    A timeout occurred on GetCommand, timeout length of 20000, will retry
    1/27/2015 9:13:29 AM:    Backup communication with SqlServr failed, hr = 0x80770003
    1/27/2015 9:13:29 AM:    A fatal error occurred during Engine Communication, exception information follows
    1/27/2015 9:13:29 AM:     Exception Info: An error occurred during data transfer operations with SqlServer, HRESULT:  0x80770003
    1/27/2015 9:13:29 AM:     Stack:    at Microsoft.SqlServer.VdiInterface.VDI.PerformPageDataTransfer(CloudPageBlob pageBlob, AccessCondition leaseCondition, Boolean forBackup)
       at BackupToUrl.Program.MainInternal(String[] args)

    The size of the backup file is 115 GB, and I have verified that there are no proxy settings on the server. It is also happening on multiple servers, whereas the T-log backups to the same container succeed every time.
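    When reproducing this by hand, it may be worth issuing the backup with COMPRESSION so that far less than the full 115 GB crosses the wire to the page blob. This is a hedged sketch of the equivalent T-SQL run from Python: the server, credential name, and storage URL are placeholders, and compression is a mitigation to test rather than a confirmed fix for the VDI timeouts.

    import pyodbc

    # BACKUP ... TO URL with an identity/key credential (SQL Server 2014-style
    # page-blob backup); the credential must already exist on the instance.
    TSQL = """
    BACKUP DATABASE dbname
    TO URL = 'https://account.blob.core.windows.net/backups/dbname.bak'
    WITH CREDENTIAL = 'AzureBackupCred', FORMAT, COMPRESSION, STATS = 5
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;Trusted_Connection=yes",
        autocommit=True,  # BACKUP cannot run inside an implicit transaction
    )
    conn.timeout = 0      # no query timeout; a backup this size runs for a long time
    conn.execute(TSQL)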

  • Power failures and the WRT54GL

    I have a WRT54GL. Yesterday, there was a series of three power failures where the power was restored within a split second. Afterwards, I noticed that the Internet connection was down and that the power LED was flashing, which usually happens when the firmware is corrupt, as described here: http://linksys.custhelp.com/cgi-bin/linksys.cfg/php/enduser/std_adp.php?p_faqid=3703. After I power-cycled my router, it worked fine. However, I am worried that the somewhat frequent power failures in my area could brick it. The electricity from Progress Energy, the local electrical utility in Cary, NC where I live, fails approximately once every one to three months, often enough that WRAL-TV did a story on the frequent power failures last year. However, all of the power problems have been failures, not power surges. Are my worries justified that Progress Energy could brick my router with its flaky power?
    Message edited by jnv11 on 03-10-2007 09:14 PM

    Hmmm... try buying a UPS to avoid problems during power outages.

  • New SSD Drive For OS - How Should I Config My Drive Scheme Now?

    I built a new PC in July based on ASUS P8P67 Pro Motherboard & i7-2600k 3.4 GHz processor w/ Cooler Master V8 heatsink. 16 GB RAM DDR3 Kingston HyperX. ASUS GTX 570 GeForce. Windows 7.
    In stock (not overclocked) configuration, this computer should cruise pretty well. Instead, it is unstable and its performance unimpressive. Premiere Pro CS 5.5 is sluggish, with frequent timeline lockups and slow response to media export commands to AME and from within Premiere. After Effects is too slow to be practical, and Encore frequently crashes. I reboot several times in a typical editing day. I had similar problems with my older computer; that's why I built a new one.
    I tried to follow all the tips in this forum for hard drive setup, etc. Benchmarking shows decent times for exports. The problem is, the computer freezes for a while before it responds to the export command, so any speed is negated. The entire Premiere screen grays out and says "not responding" for a minute; then AME finally opens, and another minute goes by before it starts doing its thing.
    I reinstalled CS and Windows several times, and uninstalled the Matrox MX02 Mini and its software. Same problem.
    I suspected my C drive could be bad because I frequently get failure messages. Utilities like Norton SystemWorks and System Mechanic Pro said almost daily that my registry had errors and my drives needed realignment or defragmenting (usually the OS drive).
    Having said all that, I went ahead and got an SSD. I now have the following:
    120 GB SSD Kingston HyperX - NEW replacement for 1 TB Barracuda SATA 7200.12
    2 TB SATA 3 Seagate Barracuda XT - Previews & Pagefile
    2 TB SATA 3 Seagate Barracuda XT - Capture Video/Audio
    1.5 TB SATA 7200.11 - empty
    1.5 TB SATA 7200.11 - Temp OS Drive (Windows 7 and Programs)
    2 TB G RAID External - Completed Projects
    2 TB NAS Drive - Exported Videos & Backups
    I have room for 5 internal hard drives and ample 850 Watts power
    I plan on a clean install of Windows rather than cloning the OS drive. What other considerations are important here? Hard drive settings? Why is my C drive partitioned with a reserve?
    According to previous tips, the pagefile should go on the fastest drive, previously one of the SATA 3s. Should it now go on the SSD?
    Should I RAID? I have not attempted to overclock until I can get it running smoothly.

    Percoplus,
    Suggest:
    1) Do a Windows format on your SSD before you load Windows; that way you can skip the strange partition that Win7 install wants to put on a "fresh" drive
    2) You will get much better CS5.5 performance with two RAID 0 arrays than with individual drives:
    - configure two 2-drive arrays with your matching drives
    - remove Windows "indexing" from both arrays; set the drive hardware policy to "enable write caching" and "turn off Windows write-cache buffer flushing" on both arrays; test both arrays for sustained reads and writes using a utility such as HD Tune Pro (set block size to 2MB) and make sure you are getting at least 100MB/sec for reads and writes on both; if you are not, figure out whether you have a driver or drive issue and resolve it
    3) format the arrays using GPT partitioning and a 4k cluster size
    4) put projects/media on the fastest array; put scratch, media cache, media cache DB, and exports on the other array
    Your system should NOT be sluggish, unless you have something wrong at this point!
    Test with PPBM5 and make sure results line up with similar cpu/drive systems.
    Regarding where to put Windows pagefile, it doesn't really matter too much since you have 16GB of RAM and Windows will not be needing the pagefile for much. I'd probably put it on the slower RAID array.
    Regards,
    Jim

  • Interview help

    Hello Everybody ,
    I have an interview for a BI/BW support consultant position. The interview specs consist of data management techniques; improving and maintaining SAP BI monitoring capabilities; solutions to support issues; an understanding of the BCC SAP solution and how its BW/BI configuration supports the business; and knowledge of WAD. Please send me expected questions and answers; I am already searching SDN using the specs.
    Regards
    Priya

    Hi Priya,
    Here are some Q&A.
    Normally the production support activities include:
    Scheduling
    R/3 job monitoring
    B/W job monitoring
    Taking corrective action for failed data loads
    Working on tickets with small changes in reports or in AWB objects
    The activities in a typical Production Support would be as follows:
    1. Data Loading - could be using process chains or manual loads.
    2. Resolving urgent user issues - helpline activities
    3. Modifying BW reports as per the need of the user.
    4. Creating aggregates in Prod system
    5. Regression testing when version/patch upgrade is done.
    6. Creating adhoc hierarchies.
    We can perform the following daily activities in production:
    1. Monitoring data load failures through RSMO
    2. Monitoring process chains daily/weekly/monthly
    3. Performing the change run for hierarchies
    4. Checking aggregate rollup
    To add to the above:
    1) Check that data targets are ready for reporting
    2) No failed or cancelled jobs in the SM37 monitors and the BW Monitor
    3) All requests are loaded for the day, month and year
    4) Also note down the time taken to load critical InfoCubes that are used for reporting
    5) Check for any breaks in the schedules of your process chains
    Why are there frequent load failures during extractions, and how do you analyse them?
    If the failures are data-related, there might be data inconsistency in the source system, even though we handle it properly in the transfer rules. We can monitor these issues in transaction RSMO and in the PSA (failed records) and update from there.
    If we are talking about the whole extraction process, there might be issues with work-process scheduling and IDoc transfer from the source system to the target system. These can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
    What are the daily tasks we do in production support? How many times do we extract data, and at what times?
    It depends... data load timings range from 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have; if the transfer rules contain roundabout logic and the update rules have calculations for customized key figures, long times are expected.
    Usually you need to work in RSMO to see which records are failing, and update from the PSA.
    What are some of the frequent failures and errors?
    There is no fixed reason for a load to fail; from an interview perspective, I would answer it this way:
    a) Loads can fail due to invalid characters
    b) Because of a deadlock in the system
    c) Because of a previous load failure, if the load is dependent on other loads
    d) Because of erroneous records
    e) Because of RFC connections
    These are some of the reasons for load failures.
    For RFC connections, we use SM59 for creating RFC destinations.
    Some questions:
    1) RFC connection lost.
    A) Check it in the SM59 transaction: RFC Destinations -> R/3 connections -> CRD client (our R/3 client), double-click, and choose Test Connection from the menu.
    2) Invalid characters while loading.
    A) Change them in the PSA and load again.
    3) ALEREMOTE user is locked.
    A) Ask your Basis team to release the user (it is mostly ALEREMOTE). Common causes are a changed password or too many incorrect login attempts as ALEREMOTE. Use the SM12 transaction to find out whether there are any locks.
    4) Lower-case letters not allowed.
    A) Uncheck the lower-case letters checkbox under the "General" tab in the InfoObject.
    5) While loading the data I am getting a message that 'Record...
    A) The field mentioned in the error message is not mapped to any InfoObject in the transfer rule.
    6) Object locked.
    A) It might be locked by some other process or a user. Also check authorizations.
    7) "Non-updated IDocs found in source system".
    8) While loading master data, one of the data packages has a red-light error message: master data/text of characteristic ZCUSTSAL already deleted.
    9) Extraction job aborted in R/3.
    A) It might have been cancelled for running longer than expected, or cancelled by R/3 users if it was hampering performance.
    10) Request could not be activated because there is another request in the PSA with a smaller SID.
    11) Repeat of last delta not possible.
    12) DataSource not replicated.
    A) Replicate the DataSource from R/3 through the source system in the AWB, assign it to the InfoSource, and activate it again.
    13) DataSource/transfer structure not active.
    A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it.
    14) ODS activation error.
    A) ODS activation errors can occur mainly for the following reasons:
    1. Invalid characters (#-like characters)
    2. Invalid data values for units/currencies etc.
    3. Invalid values for the data types of characteristics and key figures
    4. Errors generating SID values for some data
    15) Conversion routine error.
    A) Check the data format in the source.
    16) Object cannot be activated, or error when activating an object.
    A) Check the consistency of the object.
    17) No data found (in a query).
    A) Check whether the InfoProvider contains data, and delete any unsuccessful request.
    18) Error generating or activating update rules.
    1. What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o 0LI*BW Setup tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
    • MultiProvider: Does not have physical data. It allows to access data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how and why are they called?
    • Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • The table structure will always be the structure of an ODS or InfoCube; for example, if it is an ODS, then the active table's structure will be used.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In a start routine, all the records of the DataPackage are available for processing, so we can process them together. In one scenario, we wanted to apply size percentages to forecast data: for example, if material M1 is forecast to 100 in May, then after applying size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted 4 records against the one record coming in via the InfoPackage. This was achieved in the start routine.
    9. What are Return Tables?
    • When we want to return multiple records, instead of single value, we use the return table in the Update Routine. Example: If we have total telephone expense for a Cost Center, using a return table we can get expense per employee.
    10. How do start routine and return table synchronize with each other?
    • The return table is used to return the values following the execution of the start routine.
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
    o V1 & V2 don’t need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
    12. What is compression?
    • It is a process used to delete the Request IDs and this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
    • It is the method of dividing a table to enable quick reference. SAP uses fact-table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster, as data is stored in the relevant partitions; table maintenance also becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.
    22. What is ODS?
    • Operational DataSource is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
    24. What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain Communication Structure and Transfer rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    SAP BW Interview Questions 2
    1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance and data integrity?
    2) What is data integrity, and how can we achieve it?
    3) What is index maintenance, and what is the purpose of using it in real time?
    4) When and why do we use InfoCube compression in real time?
    5) What is meant by data modelling, and what does the consultant do in data modelling?
    6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
    7) What is fine-tuning, how many types are there, and for what purpose do we tune in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or are there other ways?
    8) What is meant by a MultiProvider, and for what purpose do we use one?
    9) What are scheduled and monitored data loads, and what are they for?
    Ans # 1:
    Process chains exist in the Administrator Workbench. Using them we can automate ETL processes; they allow BW staff to schedule all activities and monitor them (transaction code: RSPC).
    PROCESS CHAIN - Before defining a PROCESS CHAIN, let us define a PROCESS in any given process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
    A PROCESS CHAIN is a set of such processes linked together in a chain; in other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
    This is normally done to automate a job or task that has to execute more than one process in order to complete.
    1. Check the source system for that particular PC.
    2. Select the request ID of the PC (it will be in the Header tab).
    3. Go to SM37 in the source system.
    4. Double-click on the job.
    5. You will navigate to a screen.
    6. There, click the "Job Details" button.
    7. A small pop-up window appears.
    8. In the pop-up, take note of:
    a) the executing server
    b) the WP number/PID
    9. Open a new SM37 session (/OSM37).
    10. In it, click the "Application Servers" button.
    11. You can see the different application servers; go to the executing server (point 8a) and double-click.
    12. Go to the PID (point 8b).
    13. On the far left you can see a checkbox.
    14. Check the checkbox.
    15. On the menu bar you can see "Process".
    16. Under "Process" you have the option "Cancel with Core".
    17. Click on that option. * -- Ramkumar K
    Ans # 2:
    Data integrity is about eliminating duplicate entries in the database and achieving normalization.
    Ans # 4:
    InfoCube compression creates a new cube by eliminating duplicates. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can't alter the InfoCube; you are safe as long as you don't have any errors in your modeling.
    This compression can be done through a process chain and also manually.
    Tips by: Anand
    Ans#3
    Indexing is a process where the data is stored by indexing it. E.g., a phone book: Prasad's number would be under "P" and Rajesh's number under "R". The phone-book process is indexing; similarly, storing data by creating indexes is called indexing.
    Ans#5
    Data modelling is a process where you collect the facts, the attributes associated with the facts, navigation attributes, etc., and after you collect all of these you need to decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the team lead, the project manager, or sometimes a senior consultant (4-5 years of experience), so if you are new you don't have to worry about it. But do remember that it is an important aspect of any data-warehousing solution, so make sure that you have read about data modelling before attending any interview or even starting to work.
    Ans#6
    We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that you want to use according to your company's data model. E.g., you have a customer InfoCube (in BC), but your company uses an attribute for, say, apartment number; then instead of constructing the whole InfoCube you can add that field to the existing BC InfoCube and get going.
    Ans#7
    Tuning is the most important process in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drill-down, etc. Fine-tuning = lowering time (for everything possible). Tuning can be done in many ways, not only with partitions and aggregates; there are various other things you can do, e.g. compression.
    Ans#8
    A MultiProvider can combine various InfoProviders for reporting purposes: you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or an InfoCube, an ODS and master data, etc. You can refer to help.sap.com for more info.
    Ans#9
    Scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this in the Scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, using transaction RSMON.
    1. Procedure for repeat delta?
    You need to set the request status to red in the monitor screen and then delete it from the ODS/cube. When you open the InfoPackage again, the system will prompt you for a repeat delta.
    Also: go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
    Delta repetition is done based on the type of upload you are carrying out.
    1. If you are loading master data, most of the time you will change the QM status to red and then repeat the delta. The repeat is allowed only if you make that change, and sometimes you need to experiment if the repeat of the delta is not allowed even after the QM status is made red.
    If this is not the case, the source system, and therefore also the extractor, have not yet received any information regarding the last delta, and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again, since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN back to RED, since it was incorrect and would otherwise be requested as a data target by an ODS.
    Caution: if the terminated request was itself a REPEAT request, always set it to RED so that the system tries to carry out the repeat again.
    To determine whether a delta or a repeat is to be requested, the system uses ONLY the status in the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta/repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source-system combination must be updated in the ODS before activation, which means that in this case you must set them back to RED in the monitor using a QM action when using the solution described above.
    If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the ROOSPRMSC table in the system where the source DataMart is, which is usually your BW system, since it is a Myself extraction in this case); the status of the request tabstrip control is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your DataSource and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat, since the system would also retransfer the data of the last delta but one.
    This means you must NOT start a delta InfoPackage which would then request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR, and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfer only this data again.
    This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage; this then creates a repeat request.
    In addition, you can generally also reset the DataMart indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DataMart indicator AND after you have set the last terminated request to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED, since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
    What is a process chain and how have you used it?
    A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
    B) In one of our scenarios, we wanted to upload a wholesale-price InfoObject that holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, populating the wholesale price required a lookup in the update rule on this InfoObject's master data table. This dependency of first uploading master data and then uploading transaction data was handled through the process chain.
    What is a process chain and how have you used it?
    A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.
    1. What is a process chain and how have you used it?
    Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
    2. What is the transaction for creating process chains?
    RSPC.
    3. Explain collector processes.
    Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available for BW are:
    AND: all of the direct predecessor processes must raise an event in order for subsequent processes to be executed.
    OR: at least one predecessor process must send an event; the first predecessor process that sends an event triggers the subsequent process. Any additional predecessor processes that send an event will trigger the subsequent process again (only if the chain is planned as "periodic").
    EXOR: exclusive "OR"; similar to a regular "OR", but there is only ONE execution of the successor processes, even if several predecessor processes raise an event.
    4. What are application processes?
    Application processes represent BW activities that are typically performed as part of BW operations. Examples include:
    Data load
    Attribute/hierarchy change run
    Aggregate rollup
    Reporting Agent settings
    5. Tell some facts about process chains.
    Process chains are transportable; there is a button for writing to a change request when maintaining a process chain in RSPC.
    Process chains are available in the transport connection wizard (Administrator Workbench).
    If a process "dumps", it is treated in the same manner as a failed process.
    Graphical display of process chain maintenance requires the 620 SAP GUI and the SAP BW 3.0B frontend GUI.
    A special control background job runs to facilitate the execution of the other batch jobs of the process chain.
    Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
    6. What happens when a chain is activated?
    When a chain is activated, it is copied into the active version. The processes are planned in batch as program RSPROCESS, with type and variant given as parameters and with job name BI_PROCESS_<TYPE>, waiting for an event - all except the trigger. The trigger is planned as specified in its variant; if it is "start via meta-chain", it is not planned in batch.
    7. Steps in process chains?
    Go to transaction code RSPC and follow the basic flow of a process chain:
    1. Start chain
    2. Delete BasicCube indexes
    3. Load data from the source system into the PSA
    4. Load data from the PSA into the ODS object
    5. Activate data in the ODS object
    6. Load data from the ODS object into the BasicCube
    7. Create indexes for the BasicCube after loading
    Also check out these links:
    Help on "Remedy Tickets resolution"
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    https://forums.sdn.sap.com/click.jspa?searchID=678788&messageID=1842076
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
    plus various SDN forum threads on BW production support issues (several of the original links are broken).
    Assign points if useful.
    Regards,
    Hari Reddy

  • Is XEN aware of my domUs?

    I'm relatively new to Oracle VM. (Unfortunately) I have frequent power failures at my home, and I would like to configure my domUs to auto-start after a power failure; I've been unable to do this so far. I created all my domUs via Oracle VM Manager.
    This system is using OracleVM 2.2.
    dom0 Info
    [root@ovm ~]# uname -a
    Linux ovm 2.6.18-128.2.1.4.25.el5xen #1 SMP Tue Mar 23 12:43:27 EDT 2010 i686 i686 i386 GNU/Linux
    I tried the method in this thread: {thread:id=1021912}, but it didn't work.
    domUs
    [root@ovm ~]# ls -l /OVS/running_pool/
    total 0
    drwxrwxrwx 2 root root 3896 Oct  8 09:57 30_ora112standby
    drwxrwxrwx 2 root root 3896 Oct  8 09:55 32_ora112primary
    [root@ovm ~]# ls -l /etc/xen/auto/
    total 0
    lrwxrwxrwx 1 root root 41 Sep 26 10:16 30_ora112standby -> /OVS/running_pool/30_ora112standby/vm.cfg
    lrwxrwxrwx 1 root root 41 Sep 26 10:17 32_ora112primary -> /OVS/running_pool/32_ora112primary/vm.cfg
    So after looking through the documentation, I figured I'd use the xm list command to list the domUs after a server reboot. However, I didn't get what I expected:
    [root@ovm ~]# xm list
    Name                                        ID   Mem VCPUs      State   Time(s)
    Domain-0                                     0   563     4     r-----   1105.0
    I expected 30_ora112standby and 32_ora112primary to be listed in a shutdown state.
    When I start the domUs via Oracle VM Manager and run xm list, they are listed.
    So, as I said before, I am trying to get my domUs to start after a power failure; however, as I fall down the rabbit hole, it seems as if Xen may not be fully aware of my VMs.
    Again, I'm completely new to the Oracle VM environment, so I may be missing something extremely simple. I've tried to pore through the Oracle VM 2.2 documentation, but I can't seem to find what I'm looking for.
    Thanks for the help and if you need anything let me know.
    :)

    Hi,
    You are on the right track, but there seems to be a timing problem between the services started by ovs-agent and Xen: when Xen tries to start the configured VMs at boot time, the OVS directory tree is not yet online.
    Look at this post: http://run.tournament.org.il/oracle-vm-post-install-check-list/
    It did it for me.
    Have a nice day,
    Michael
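    Building on that: instead of relying on /etc/xen/auto at boot, a late-running script can wait for the repository to appear and then start the guests itself. A hedged sketch (Python 2.4-era syntax for the el5 dom0; the five-minute timeout and running it from rc.local are assumptions):

    #!/usr/bin/env python
    # Wait for the OVS repository to come online, then boot every guest linked
    # under /etc/xen/auto -- works around the race where Xen tries to start
    # VMs before ovs-agent has mounted /OVS.
    import os, subprocess, time

    deadline = time.time() + 300  # give ovs-agent up to five minutes
    while not os.path.isdir("/OVS/running_pool") and time.time() < deadline:
        time.sleep(5)

    for name in sorted(os.listdir("/etc/xen/auto")):
        cfg = os.path.join("/etc/xen/auto", name)  # symlink to .../vm.cfg
        subprocess.call(["xm", "create", cfg])     # already-running guests just error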
