Issue with brconnect jobs
Hi All,
We recently upgraded our Oracle database from 9i to 10.2.0.4 and our 640 kernel to patch level 347.
Since the upgrade, my DB13 jobs have not been running correctly.
I am getting this error:
brconnect: error while loading shared libraries: libclntsh.so.10.1: cannot open shared object file: No such file or directory
I checked that libclntsh.so.10.1 is available at /oracle/PD0/102_64/lib, but I do not know why I am getting this error.
Can you please suggest how to resolve this issue?
Regards,
Shivam Mittal
Shivam Mittal wrote:
Hi Orkun,
>
> We are using a Linux OS. I checked, and we have LD_LIBRARY_PATH set in our environment, pointing to this directory:
>
> LD_LIBRARY_PATH=/usr/sap/PD0/SYS/exe/run:/usr/sap/PD0/SYS/exe/runU:/oracle/PD0/102_64/lib
>
> Please suggest whether we have to set any other variable to resolve the issue.
>
> Shivam
Hi Shivam,
Are you facing this error while executing "brconnect" as the ora<sid> user?
What about the permissions on libclntsh.so.10.1?
Is there any problem with that file after you executed "relink all"?
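As a rough sketch of these checks (paths taken from this thread; adjust them to your own SID and version, and note the ls/ldd lines are illustrative and must be run on the DB host as the ora<sid> user):

```shell
#!/bin/sh
# Sketch: verify that the Oracle client library directory from this thread
# is actually on LD_LIBRARY_PATH (paths are the ones posted above).
ORA_LIB=/oracle/PD0/102_64/lib
LD_LIBRARY_PATH=/usr/sap/PD0/SYS/exe/run:/usr/sap/PD0/SYS/exe/runU:$ORA_LIB
export LD_LIBRARY_PATH

# Check whether the lib directory is a component of LD_LIBRARY_PATH.
case ":$LD_LIBRARY_PATH:" in
  *":$ORA_LIB:"*) on_path=yes ;;
  *)              on_path=no  ;;
esac
echo "lib dir on LD_LIBRARY_PATH: $on_path"

# On the real host you would also check (illustrative, not run here):
#   ls -l "$ORA_LIB/libclntsh.so.10.1"          # readable by ora<sid>?
#   ldd "$(which brconnect)" | grep libclntsh   # resolved, or "not found"?
```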
Best regards,
Orkun Gedik
Edited by: Orkun Gedik on Jun 15, 2011 2:46 PM
Similar Messages
-
Issue with background job--taking more time
Hi,
We have a custom program which runs as the background job. It runs every 2 hours.
It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it spends most of its time executing native SQL on DBA_EXTENTS. When we tried to fetch a smaller number of records from DBA_EXTENTS, it worked fine.
But we need the program to fetch all the records.
But it works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
Here is the SQL statement:
EXEC SQL PERFORMING SAP_GET_EXT_PERF.
  SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
         SEGMENT_TYPE, TABLESPACE_NAME,
         EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
    FROM SYS.DBA_EXTENTS
   WHERE OWNER LIKE 'SAP%'
    INTO :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
         :EXTENTS_TBL-PARTITION_NAME,
         :EXTENTS_TBL-SEGMENT_TYPE, :EXTENTS_TBL-TABLESPACE_NAME,
         :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
         :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
ENDEXEC.
Can somebody suggest what has to be done?
Has something changed in SAP 7 (with respect to background jobs, etc.), or do we need to fine-tune the SQL statement?
Regards,
Vivdha
Hi,
there was an issue with LMTs, but that was fixed in 10.2.0.4;
besides that, check for missing system statistics.
But WHY do you collect this information every 2 hours? The DBA_EXTENTS view is based on really heavily used system tables.
Normally you would run queries of this type against DBA_EXTENTS only in special cases, e.g. to identify corrupt blocks:
SELECT owner , segment_name , segment_type
FROM dba_extents
WHERE file_id = &AFN
AND &BLOCKNO BETWEEN block_id AND block_id + blocks -1
Not sure what you want to achieve with it.
There are monitoring tools (OEM ?) around that may cover your needs.
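For occasional use, the corrupt-block lookup above can be wrapped in a small script; a sketch only (the sqlplus call is left as a comment because it needs a running instance, and the &AFN/&BLOCKNO substitution variables become positional parameters here, with placeholder defaults):

```shell
#!/bin/sh
# Sketch: parameterize the corrupt-block lookup; the absolute file number
# and block number come in as $1 and $2 (defaults are only placeholders).
AFN=${1:-4}
BLOCKNO=${2:-123}
SQL="SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = $AFN
   AND $BLOCKNO BETWEEN block_id AND block_id + blocks - 1;"
echo "$SQL"
# On the DB host you would run it with:
#   echo "$SQL" | sqlplus -s / as sysdba
```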
Bye
yk -
Virsa CC 5.1: Issue with Background Job
Dear All,
I have almost finished configuring our new Compliance Calibrator dashboard (Java stack of NW '04), but unfortunately I have an issue with SoD analysis now.
Using SAP Best Practice recommendations, we have uploaded all functions, business processes and risks into CC database, and then successfully generated new rules (there are about 190 active ones in our Analysis Engine).
I also configured JCo to the R/3 backend and was able to extract the full list of our users, roles, and profiles. But somehow the background analysis job fails.
In the job history table I see the following message: "Error while executing the Job:null", and in the job log there is an entry saying "Daemon idle time longer than RFC time out, terminating daemon 0". But it's quite strange, as we use the default values: RFC Time Out = 30 min, and daemons are invoked every 60 seconds.
Please, advise whether you had similar issues in your SAP environment and, if yes, how you resolved them.
Thanks,
Laziz
Hi Laziz,
I am now just doing the first part of CC. May I know the details of how you configured JCo to the R/3 backend? Do you need to create an SM59 type T connection in R/3? If yes, I am lacking the details to do that; could you help? Thank you.
Regards
Florence -
SAP BW structure/table name change issue with BODS Jobs promotion
Dear All, one of my clients has an issue with the promotion of BODS jobs from one environment to another. They move SAP BW projects/tables along with BODS jobs (separately) from DEV to QA to Prod.
In SAP BW the structures and tables get a different postfix when they are transported to the next environment. The promotion in SAP BW (transport) is an automated process, and it is the BW deployment mechanism that causes the postfixes to change. As a result, with the transport from Dev to QA in SAP BW we end up with table name changes (if only suffixes), which means that when we deploy our Data Services jobs we immediately have to change them for the target environments.
Please let me know if someone has deployed some solution for this.
Thanks
This is an issue with the SAP BW promotion process. The SAP BASIS team should not turn on the setting that suffixes the system ID onto table names during promotion.
Thanks, -
Hi all,
I have an issue with data loads in BI 7.
We have transaction data loads scheduled in process chains. During the execution of an InfoPackage in the process chain, the connection to ECC was lost and the BI server also went down.
When the servers were recovered, the status of the running InfoPackages was yellow.
Now i have two questions.
1. What happens if the InfoPackage status is yellow, i.e. half of the records are transferred to BI and half are left in ECC?
Will executing the LUWs in TRFC solve the issue?
2. What happens if the request becomes red with half of the records transferred?
How can we recover the other half of the records?
Hi,
See, that is not the case. If your connectivity goes down while the data load is in progress, no matter whether it is a delta load or a full load, you always have the option of deleting the red request and going for a reload. It is as simple as that.
If it is a delta load, it will prompt you with a pop-up saying that the last delta was incomplete and asking whether you want to repeat the delta load.
Once you do the repeat delta, the entire data will be reloaded again. Also, once you do a repeat delta, all the records in the delta queue that failed during the previous load, plus all the new records created up to that time, will be loaded into BW.
Also, for your information, there is nothing like a separate repeat delta queue and a normal delta queue; only one delta queue exists.
Hope it is clear.
Rgds,
Amit Kr.
Edited by: Amit Kr on Jul 26, 2010 1:04 PM
Edited by: Amit Kr on Jul 26, 2010 1:05 PM -
Hello Team,
After creating a new protection group, the jobs were frozen and never kicked off.
What could be the reason?
Regards,
Suman Rout
Hi,
Please see the below blog which may assist with troubleshooting scheduled jobs.
Blog:
http://blogs.technet.com/b/dpm/archive/2014/10/08/how-to-troubleshoot-scheduled-backup-job-failures-in-dpm-2012.aspx
Previous forum post:
https://social.technet.microsoft.com/Forums/en-US/ed65d3e0-c7d7-488b-ba34-4a2083522bae/dpm-2010-scheduled-jobs-disappear-rather-than-run?forum=dataprotectionmanager
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually
answer your question. This can be beneficial to other community members reading the thread. Regards, Dwayne Jackson II. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights." -
Issues with scheduling job in sm36 for a standard report...
Hi,
After creating a variant for a program, I execute SM36 -> Define step, and then I select the ABAP program and the variant name associated with it. Now what do I need to do to schedule the job for that report?
Hi,
After giving the program and variant,
press the Start Condition button.
Press Immediate to run it immediately,
or
choose the date and time you want this job to run.
After that, press Save.
Then press Save in SM36 for the job. This will release the job.
Thanks,
Naren -
Issue with Archiving_deletion job
Team
I am facing an issue because an archiving deletion job was canceled halfway.
We ran archiving for SD_VBAK and it completed successfully, generating three archive file sessions. Now I am running the deletion job. One archive deletion session completed successfully. The other two jobs were canceled due to the weekly system restart. I tried scheduling a new deletion via SARA, and it runs successfully in test mode. When I uncheck test mode and run it, the jobs are canceled with the following error:
"Step 001 started (program S3VBAKDL, variant ZZCVB
Archive file 000144-002SD_VBAK is being verified
Archive file 000144-002SD_VBAK is being processed
New fill for archive for which fill was not completed
Text 7600293102 ID ZH06 language EN not found
Job canceled after system exception ERROR_MESSAGE"
Can somebody throw some light on how to fix this issue? ZH06 is a document type.
I am unable to proceed further with the other objects due to this issue.
Please help.
Regards
Mathi
Dear Mathi,
Kindly check the error log file and please let us know. Also check the Statistics tab in the SARA transaction, and check the job number status (whether it is green or yellow).
Thanks and Warm Regards
Basavaraj Evani -
Issue with getting the entire spooled job into the pdf document
Hi All,
My scheduled job runs fine and completes successfully, but it appears to have some issues with getting the entire spooled job into the PDF document which it sends to an end user. Has anyone encountered this before? Your kind assistance would be appreciated.
Thank you.
Regards,
John
Hi,
Please read below link for your query:-
http://help.sap.com/saphelp_nw04/helpdata/EN/85/54c73cee4fb55be10000000a114084/content.htm
Regards,
Anil -
SharePoint 2010 content deployment job issue with duplicate fields in User Information List
Hi friends,
I am facing the below issue with the content deployment job.
It was working earlier, but for the last couple of days all the content deployment jobs in the production environment have been failing with the below error:
Field name already exists. The name used for this field is already used by another field in the list. Select another name and try again.
ObjectName="User Information List".
When I check the fields in the User Information List on the target site, I find a couple of duplicate columns, like "Ask me about", "First name", "Last name", etc.
Do I need to drop the target site collection or recreate it with a fresh content deployment job?
Please suggest.
Please help .
Regards
Subrat
Hi,
According to your post, my understanding is that you got duplicate field error.
Based on the error message, you can try to use the following code sample to remove duplicate records, and check whether it works:
http://social.msdn.microsoft.com/Forums/en-US/sharepointgeneralprevious/thread/41ee04bd-91fb-4bf9-932a-bac42c56c357
Here is a similar issue, you can also use the ‘RemoveDuplicateColumn64’ provided:
http://sharepointsurfer.wordpress.com/2012/04/27/how-to-fix-publishing-site-content-deployment-error-duplicate-first-name-column/
What’s more, as you had said, you can recreate a site with a fresh deployment job.
Thanks & Regards,
Jason
Jason Guo
TechNet Community Support -
Issue with Compliance Calibrator 5.2 SP9 Background Jobs
Hello,
I'm having an issue with Compliance Calibrator 5.2 SP9 where, if I run a role analysis as a background job with the same parameters as a previously run role analysis background job, the second job will display a failure message. It does not appear to matter whether the similar background jobs were run by the same individual or by separate individuals. As long as the previously run job is still in the background job history, any job with the same parameters run by a user will fail.
Is this normal operation for CC?
Is there a configuration change that could allow a job to be rerun in the background multiple times?
Is there a fix for this issue in a later support pack or with upgrade to 5.3?
Thanks for the help, it's much appreciated.
To better clarify what is occurring: the 1st job will run, complete successfully, and return/display the appropriate results correctly. The 2nd job will then be subsequently kicked off and finish the same as the previous job, except that when you open the background results no data is displayed and the message at the bottom reads: Failed to display result. To make more sense of what I'm doing, these are the logical steps I'm following:
1. Select Role Level analysis
2. Enter parameters for analysis
3. Schedule background job to run immediately
4. View background job results (successful job and correct results)
5. Select Role Level analysis (with same or any other user)
6. Enter same parameters as step 2 for analysis
7. Schedule background job to run immediately
8. View background job results (successful job, but the error message 'Failed to display result' instead of the CC reports)
I believe the error lies somewhere in running a job with the same parameters (same role and same report type). If I delete from the background history the previous jobs that have the parameters I'm using and try the analysis again, a third time, with the same parameters as before, it will run successfully and display the correct results.
Is this normal and acceptable operation for CC5.2 SP9?
Is there a configuration change that would allow a job to be run in the background multiple times with the correct CC results?
Is there a fix for this issue in a later support pack or with upgrade to 5.3? -
Issues with simple cron job to query database
Hi all,
This is really a mix of both a database issue and a Linux issue, so I thought I'd post it here to get the most visibility. I have been looking into this, and a couple of other DBAs at my company have also helped, but we have not been able to figure it out. Basically, I have a simple shell script that just makes a SQL*Plus connection to the database and queries V$INSTANCE. When I run the script from the command line, it runs fine, but for the life of me, I can't get it to run from the crontab.
I'm running Oracle 10.1.0.3.0 on a Red Hat Enterprise Linux AS release 3 (Taroon Update 5) server. This is all local on the server: the database is on this server, and this is also where I'm running the script.
I've made the script as simple as possible, like this:
#!/bin/sh
sqlplus / as sysdba <<EOF
Select * from v\$instance;
EOF
When I run this script from the command line, it runs fine. But then I put it in a crontab entry like this:
00 * * * * /home/oracle/scripts/cron_test.sh > /home/oracle/scripts/cron_test.log 2>&1
And I am seeing this in the logfile:
Connected to an idle instance.
SQL> SQL> Select * from v$instance
ERROR at line 1:
ORA-01034: ORACLE not available
SQL> SQL> Disconnected
At first glance, it seems like a simple issue with environment variables not being set properly, but I have tested this to exhaustion. I've put all of the environment variables into the shell script to duplicate what I have on the command line, but it still won't work. I've checked just about everything else I could think of: looked at the tnsnames and listener config, looked at the profile sourced from the crontab in /etc/profile; nothing works...
And on any other server that I test this on, it all works fine and as expected. So I'm trying to figure out why I'm only having this issue on this one particular server. Is there some kind of server config or setting that could be causing it?
I appreciate any comments or tips!!
Thanks,
Brad
Done. Here is the output from the command line:
AGENT_HOME=/u01/app/oracle/product/10.2.0/agent10g
_=/bin/env
G_BROKEN_FILENAMES=1
HISTSIZE=1000
HOME=/home/oracle
HOSTNAME=rlldb.celgene.com
INPUTRC=/etc/inputrc
JAVA_HOME=/usr/java/j2re1.4.2_08/
LANG=en_US.UTF-8
LESSOPEN=|/usr/bin/lesspipe.sh %s
LL_ALL=en_US
LOGNAME=oracle
LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
MAIL=/var/spool/mail/oracle
MANPATH=:/opt/IBM/director/man
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
ORACLE_SID=rlldb
PATH=/u01/app/oracle/product/10.1.0/Db_1/bin:/usr/java/j2re1.4.2_08//bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/IBM/director/bin:/home/oracle/bin:/sbin://u01/app/oracle/product/10.1.0/Db_1/OPatch:/u01/app/oracle/dbvisit/dbvisit:/u01/app/oracle/product/10.1.0/Db_1/bin
PWD=/home/oracle/scripts
SHELL=/bin/bash
SHLVL=2
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
SSH_CLIENT=10.20.241.84 2988 22
SSH_CONNECTION=10.20.241.84 2988 10.20.1.29 22
SSH_TTY=/dev/pts/4
TERM=xterm
USER=oracle
And the output from the cron run:
HOME=/home/oracle
LOGNAME=oracle
ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
ORACLE_SID=rlldb
PATH=/usr/bin:/bin:/u01/app/oracle/product/10.1.0/Db_1/bin
PWD=/home/oracle
SHELL=/bin/sh
SHLVL=2
_=/usr/bin/env
And the diff of the 2:
1,4d0
< AGENT_HOME=/u01/app/oracle/product/10.2.0/agent10g
< _=/bin/env
< G_BROKEN_FILENAMES=1
< HISTSIZE=1000
6,11d1
< HOSTNAME=rlldb.celgene.com
< INPUTRC=/etc/inputrc
< JAVA_HOME=/usr/java/j2re1.4.2_08/
< LANG=en_US.UTF-8
< LESSOPEN=|/usr/bin/lesspipe.sh %s
< LL_ALL=en_US
13,16d2
< LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
< MAIL=/var/spool/mail/oracle
< MANPATH=:/opt/IBM/director/man
< ORACLE_BASE=/u01/app/oracle
19,21c5,7
< PATH=/u01/app/oracle/product/10.1.0/Db_1/bin:/usr/java/j2re1.4.2_08//bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/IBM/director/bin:/home/oracle/bin:/sbin://u01/app/oracle/product/10.1.0/Db_1/OPatch:/u01/app/oracle/dbvisit/dbvisit:/u01/app/oracle/product/10.1.0/Db_1/bin
< PWD=/home/oracle/scripts
< SHELL=/bin/bash
PATH=/usr/bin:/bin:/u01/app/oracle/product/10.1.0/Db_1/bin
PWD=/home/oracle
SHELL=/bin/sh
23,28c9
< SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
< SSH_CLIENT=10.20.241.84 2988 22
< SSH_CONNECTION=10.20.241.84 2988 10.20.1.29 22
< SSH_TTY=/dev/pts/4
< TERM=xterm
< USER=oracle
_=/usr/bin/env
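Given the diff above, one way to rule out environment differences entirely is to make the wrapper script fully explicit; a sketch (values copied from the env dumps in this thread, and only a hypothesis, since ORACLE_HOME and ORACLE_SID already appear in the cron environment):

```shell
#!/bin/sh
# Sketch of a cron-safe wrapper: cron starts /bin/sh with a minimal
# environment, so set everything the Oracle tools need explicitly
# (values taken from the env dumps above).
ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
ORACLE_SID=rlldb
PATH=$ORACLE_HOME/bin:/usr/bin:/bin
export ORACLE_HOME ORACLE_SID PATH
echo "using ORACLE_HOME=$ORACLE_HOME SID=$ORACLE_SID"
# The original sqlplus here-document would follow here; running
#   env | sort
# from inside this script makes further diffing against a login shell easy.
```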
I've even gone to the extreme, of taking every single environment variable from my command line environment, and putting all of them into the shell script. but still no luck... I've also contacted my Linux SysAdmins, to see if they have any thoughts/input on this. -
Problem with Background Jobs in RAR 5.3
Hi All,
I scheduled background jobs in RAR 5.3, but the jobs' status has shown as "Running" for the past 7 days. Please help me with this issue. It is very urgent; because of this problem my SoD report is delayed for this month.
Every month it takes 2 to 3 days to complete the jobs, but this time it has taken 7 days, so I terminated the jobs and asked the Basis team to restart the background job service to stop the RAR jobs.
In previous months we used the same variant, but at that time it was RAR 5.3 with SP13 and 2 threads; now they have changed it as per the details below.
I am using RAR 5.3 SP15 with 4 threads.
Below is the job log display (unable to paste the complete log):
=======================
Nov 8, 2011 12:14:46 AM com.virsa.cc.common.util.ExceptionUtil logError
SEVERE: null
Nov 8, 2011 12:14:58 AM com.virsa.cc.xsys.bg.BatchRiskAnalysis getBAPIRoleData
INFO: -- Last Run Date is 2011-11-07
Nov 8, 2011 12:14:58 AM com.virsa.cc.xsys.bg.BatchRiskAnalysis getBAPIRoleData
INFO: -- Current Date is 2011-11-08
Nov 8, 2011 12:14:58 AM com.virsa.cc.common.util.ExceptionUtil logError
SEVERE: null
java.lang.NullPointerException
Risk Analysis Time: Started @:Tue Nov 08 04:30:36 UTC 2011
performActPermAnalysis
INFO: Detailed Analysis Time:
Risk Analysis Time: Started @:Tue Nov 08 04:30:36 UTC 2011
Rule Load Time: Started @:Tue Nov 08 04:30:36 UTC 2011
Rule Load Time:14millisec
Org Rule Loop Time: Started @:Tue Nov 08 04:30:36 UTC 2011
Rule Loop Time: Started @:Tue Nov 08 04:30:36 UTC 2011
Rule Loop Time:5303millisec
Org Rule Loop Time:5303millisec
Risk Analysis Time:5481millisec
Nov 8, 2011 4:30:41 AM com.virsa.cc.xsys.riskanalysis.AnalysisEngine riskAnalysis
INFO: End Analysis Engine->Risk Analysis ..... memory usage: free=2024M, total=6144M
Nov 8, 2011 4:30:42 AM com.virsa.cc.common.RiskAnalysisReport render
INFO: RiskAnalysisReport render: memory changed=0M,free=2020M, total=6144M
Please help me regarding this issue.
thanks in advance,
suresh kumar
Edited by: K S KUMAR on Nov 8, 2011 6:36 AM
Hi Ashish,
My background job is ad hoc. This is not the first time I have done this; every month I do this activity for the SoD report (13 background jobs) and other reports. Last month they updated SP13 to SP15, and now I am facing this issue. Once the background job service was restarted, all the jobs ran properly. Why is that? I need the root cause for this.
please help me regarding this.
thanks,
suresh kumar -
I see a lot of people out there, specifically Windows OS owners (XP all the way to 7), having issues with iTunes 10.5. Here are the issues I've encountered (some may only have one problem, others may have a combo pack of problems).
1. iphone 4S or older won't sync or backup
2. iTunes store won't open
The issue I found, after using the diagnostics tool in iTunes (which is useless for identifying scripting issues or major communication issues), reviewing the Windows event log, and reinstalling all Apple software and components, was that my spyware and antivirus software was causing a communication error within Apple's software. I use Spyware Doctor with antivirus. I had to uninstall Spyware Doctor in order for iTunes 10.5 to work. After that, everything worked well, and I did a clean install of Spyware Doctor.
If the issue still persists, you might have to reset your internet protocol (TCP/IP). You have to run a command prompt as an administrator and enter the following: netsh winsock reset
Visit this Microsoft link for more info about resetting your IP: support.microsoft.com/kb/299357
Final words: Apple, like all other software companies, continually encounters software glitches and issues because they don't test products correctly before bringing them to market; this has been a recurring theme for the last 2-3 years. The most important aspects of using any software should be functionality, efficiency, ease of use, and above all "a positive end user experience." Wasn't that Steve Jobs's mantra? We can all hope that Apple embraces his philosophy and vision for the many years to come. This, above all, is not a great start for Apple moving forward without their true inspirational leader.
Here is what worked for me for both 10.5 and 10.5.1, after hours of frustration with iTunes stopping and shutting down in the middle of a sync or while transferring apps downloaded on my iPad 2 or iPhone 4:
Start iTunes in iTunes Safe Mode:
1. hold down the Shift and Control keys together and click to start iTunes.
2. when you see the iTunes icon in the task bar, click on it.
3. you will get a box that says "iTunes is running in safe mode. Visual plug-ins you have installed have been temporarily disabled." [I don't know what plug-ins I installed, but you should clearly see that these so-called visual plug-ins are likely the cause of your issues];
4. click Continue to open iTunes.
You could already have your device connected to your computer, or connect it after opening iTunes in safe mode.
Now press your sync button and everything should work fine.
I also disabled Genius in iTunes preferences but I'm not sure if this is one of the issues causing these types of problems.
I hope this works. This may be the workaround for all of us Windows 7 64-bit folks. -
Performance Issues with SDV03V02
Hi,
The order rescheduling program (SDV03V02) runtime is too long. We have searched for notes in the OSS database but have not found any suitable ones. One OSS note suggested implementing indexes on VAPMA and VBUK, but that has not helped either. We are thinking of writing a Z program which would submit the standard SAP program as multiple jobs; the Z program would schedule the program into multiple jobs using different sets of materials.
Has anyone faced runtime issues with this program? Any other suggestions?
Regards,
Aravind
Aravind,
the program invokes the availability check for all backorders based on material.
I would check the following:
- How many documents is it going to process?
- If the number of documents is too big and there are really old docs, you may have a problem with order status updates (your order never goes to "complete" status even if all items are rejected or fully shipped, just as an example).
- If you really need to reschedule many orders, you'll have to consider parallel rescheduling. BUT it's not as simple as submitting parallel jobs for different materials (material ranges), as the jobs will very likely fail due to contention.
- Another option to speed up the process (it may really help) is to change the timing of all order outputs that are set to process immediately to '1', so they are processed via RSNAST00, then schedule a job after your rescheduling is complete to process all these outputs. It can really help, as you separate output processing from the availability check/order save, and the same order may be saved several times during rescheduling. You can do this by defining a "timing" routine for your output types.
- It is always good to run an SQL trace, or just to monitor this job in SM50 to see what it's doing, OR run it under a new user ID and check the statistics for that ID to see which programs/tables took the most time.
It would be good to measure the average order processing time (check how many orders were processed in this job; I hope this info is in the spool, or you can calculate it from messages in the job log). It may help to determine whether the problem is caused by too-slow order processing or just by increased order volume.
There are many options to look at regarding rescheduling, but I would start with order processing performance.