Job-scheduling on a specific instance on source system
Hello All,
Is there any possibility to schedule an InfoPackage in BW and tell the job that the corresponding job on the source system (e.g. an R/3 system) should run on a specific instance?
Thanks,
Pavan
Hi,
I guess the job you mean is the extraction job in R/3 that corresponds to the BI load.
If this job always runs under a particular RFC user, then based on that user you can restrict the instance on which it is supposed to run.
Take help from Basis; they can set this up.
Regards,
Similar Messages
-
Background job is running for a long time in source system (ECC)
Hi All,
The background job is running for a long time in the source system (ECC) while extracting data to the PSA.
In the ECC system I checked SM66 and SM50; the job is still running,
and in SM37 the job is Active.
There are at most about 7,000 records, and the extractor is 2LIS_02_ITM, but it takes 11 to 13 hours to load to the PSA daily.
I have checked the enhancements and everything is correct.
Please help me solve this issue.
Regards
Supraja K
Hi Sudhi,
The difference between "Call customer enhancement..." and "Result of customer enhancement:..." is very small; we can call it about a second.
The big gap is between the "1 LUWs confirmed..." line
and the first "Call customer enhancement..." line.
Please find the job log details below, and give me a solution to resolve this:
01:06:43 * ztta/roll_extension........... 2000000000 * R8 050
01:06:43 1 LUWs confirmed and 1 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA RSQU 036
06:56:31 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 5.208 records R3 407
06:56:31 Result of customer enhancement: 5.208 records R3 408
06:56:31 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 5.208 records R3 407
06:56:31 Result of customer enhancement: 5.208 records R3 408
06:56:31 PSA=1 USING SMQS SCHEDULER / IF [tRFC=ON] STARTING qRFC ELSE STARTING SAPI R3 299
06:56:31 Synchronous send of data package 1 (0 parallel tasks) R3 410
06:56:32 tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE = R3 038
06:56:32 tRFC: Start = 00.00.0000 00:00:00, End = 00.00.0000 00:00:00 R3 039
06:56:32 Synchronized transmission of info IDoc 3 (0 parallel tasks) R3 414
06:56:32 IDOC: Info IDoc 3, IDoc No. 1549822, Duration 00:00:00 R3 088
06:56:32 IDoc: Start = 04.10.2011 06:56:32, End = 04.10.2011 06:56:32 R3 089
06:56:32 Altogether, 0 records were filtered out through selection conditions RSQU 037
06:56:32 Synchronized transmission of info IDoc 4 (0 parallel tasks) R3 414
06:56:32 IDOC: Info IDoc 4, IDoc No. 1549823, Duration 00:00:00 R3 088
06:56:32 IDoc: Start = 04.10.2011 06:56:32, End = 04.10.2011 06:56:32 R3 089
06:56:32 Job finished 00 517
Regards
Supraja -
Specific rights on source-system connection?
Hi all,
we are about to connect our Solution Manager BW system to our "standard" BW system.
But due to security reasons, we only want to allow access to specific datasources from the SolMan BW system.
We tried to set up the rights for the BATCH/background-Users accordingly, but couldn't succeed in setting a specific restriction e.g. for replication of datasources or even data-load.
What we could see is that we can either restrict access in full or allow it in full...
So that's exactly the question: is there any way we can set specific rights on replication, activation and data load from the source system?
Best regards,
bivision -
BIREQU_* job consuming more time in R/3 Source system
Hi Experts,
I am facing performance issues while extracting data from SAP R/3 after an upgrade to Oracle.
The R/3 job BIREQU_* takes a long time in the data selection step,
particularly in this step:
**02.08.2008 15:32:38 ***************************************************************************
*02.08.2008 16:37:04 533 LUWs confirmed and 533 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA*
02.08.2008 15:32:38 Job started
02.08.2008 15:32:38 Step 001 started (program SBIE0001, variant &0000000013512, user ID BW_BG)
02.08.2008 15:32:38 Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
02.08.2008 15:32:38 DATASOURCE = 0UC_BILLORD
02.08.2008 15:32:38 *************************************************************************
02.08.2008 15:32:38 * Current Values for Selected Profile Parameters *
02.08.2008 15:32:38 *************************************************************************
02.08.2008 15:32:38 * abap/heap_area_nondia......... 2000683008 *
02.08.2008 15:32:38 * abap/heap_area_total.......... 2000683008 *
02.08.2008 15:32:38 * abap/heaplimit................ 40894464 *
02.08.2008 15:32:38 * zcsa/installed_languages...... DEN *
02.08.2008 15:32:38 * zcsa/system_language.......... N *
02.08.2008 15:32:38 * ztta/max_memreq_MB............ 2047 *
02.08.2008 15:32:38 * ztta/roll_area................ 6500352 *
02.08.2008 15:32:38 * ztta/roll_extension........... 2000683008 *
02.08.2008 15:32:38 *************************************************************************
02.08.2008 16:37:04 533 LUWs confirmed and 533 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
02.08.2008 16:38:18 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 6.597 records
02.08.2008 16:38:18 Result of customer enhancement: 6.597 records
02.08.2008 16:38:18 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 6.597 records
02.08.2008 16:38:18 Result of customer enhancement: 6.597 records
02.08.2008 16:38:18 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
02.08.2008 16:38:18 IDOC: Info IDoc 2, IDoc No. 3256989, Duration 00:00:00
02.08.2008 16:38:18 IDoc: Start = 02.08.2008 15:32:38, End = 02.08.2008 15:32:38
02.08.2008 16:38:19 Altogether, 0 records were filtered out through selection conditions
02.08.2008 16:38:19 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
02.08.2008 16:38:19 IDOC: Info IDoc 3, IDoc No. 3256996, Duration 00:00:00
02.08.2008 16:38:19 IDoc: Start = 02.08.2008 16:38:19, End = 02.08.2008 16:38:19
02.08.2008 16:38:28 tRFC: Data Package = 1, TID = 0AB50A6764EE4894715A019E, Duration = 00:00:10, ARFCSTATE =
02.08.2008 16:38:28 tRFC: Start = 02.08.2008 16:38:18, End = 02.08.2008 16:38:28
02.08.2008 16:38:28 Synchronized transmission of info IDoc 4 (0 parallel tasks)
02.08.2008 16:38:29 IDOC: Info IDoc 4, IDoc No. 3256997, Duration 00:00:01
02.08.2008 16:38:29 IDoc: Start = 02.08.2008 16:38:28, End = 02.08.2008 16:38:29
02.08.2008 16:38:29 Job finished
I am facing this problem while extracting data for delta uploads;
full uploads work fine.
What might be the problem?
Please advise.
Hi all,
Did anyone find a solution for the original problem?
Step taking too long to finish?
n LUWs confirmed and n LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
Regards,
Sanjyot
-
Bw extraction process starting after a process/job in source system
Hello gurus,
I want to start an extraction process (a process chain) in the BW system after a background job has finished in the source system (which is an R/3 system).
My solution looks like this: I create an RFC function in BW that raises event XX, and schedule the process chain to start on event XX (all of this on the BW side).
In the source system I call the RFC function when the desired background job has finished.
I am looking for another (perhaps simpler) solution for this task, or for any other suggestion.
Thanks
Hi,
Refer the note :135637
(1) Create an event in the BW system
Define a system event (for example SAP_BW_TRIGGER) with Transaction SM62.
(2) Create a function module in the BW system
Create a function module (for example Z_BW_EVENT_RAISE) with the help of Transaction SE37. For the source code refer to the attachment of this note.
(3) Use the code below in the triggered job in the R/3 system. Here BIMCLNT000 is the logical system name of the BI system; you need to change it to yours.
PARAMETERS rfcdest LIKE rfcdisplay-rfcdest DEFAULT 'BIMCLNT000'.

CALL FUNCTION 'Z_BW_EVENT_RAISE'
  DESTINATION rfcdest
  EXPORTING
    eventid = 'SAP_BW_TRIGGER'.
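The source of the function module itself (step 2) is in the note's attachment and is not reproduced in this thread. As a rough sketch of what such a wrapper typically looks like, it usually just calls the standard event-raising function module BP_EVENT_RAISE; the interface below is an assumption for illustration, not the note's actual code:

```abap
FUNCTION z_bw_event_raise.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(EVENTID) TYPE  BTCEVENTID
*"----------------------------------------------------------------------
  " Raise the background-processing event in this (BW) system so that
  " process chains waiting on the event are started.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid                = eventid
    EXCEPTIONS
      bad_eventid            = 1
      eventid_does_not_exist = 2
      eventid_missing        = 3
      raise_failed           = 4
      OTHERS                 = 5.
ENDFUNCTION.
```

The function module must be RFC-enabled so the R/3 job can call it via DESTINATION.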
Regards,
Anil Kumar Sharma .P -
R/3 connection to BIW - No Idocs arrived from the source system
Question,
Hi Team,
I have issues loading attribute data from an R/3 source system to BI.
I go to the
Data Warehousing Workbench - Modeling window. In the DataSources view, under my application component Group ##, on my DataSource I create an InfoPackage and save it.
Later I select the following options
Full Update = select
On the Processing tab page, select only PSA.
On the Schedule tab page, choose Start Data Load Immediately and start the data load.
Up to here everything is in the active version and saved, and the connection to the source system is active and working fine.
However, when I load, I receive the following messages, observed in the Monitor window:
Data was requested OK
Request still running
Diagnosis
No errors found. The current process has probably not finished yet.
System Response
The ALE inbox of BI is identical to the ALE outbox of the source system
or
the maximum wait time for this request has not yet been exceeded
or
the background job has not yet finished in the source system.
Current status
No Idocs arrived from the source system.
When I go to the Details tab, under the Transfer section, I receive the following message:
Data Package 1 : arrived in BW ; Processing : Selected number does not agree with transferred n
But when I actually go to the PSA maintenance section and select the data and number of records, I can see that the data is loaded into the PSA.
But when I run the transformation, I don't get data there.
Kindly help me resolve this issue.
Regards,
BluSKy
Hi,
the BIW I am using is a compact BIW, which is in EXI, and I execute transaction /nrsa1 to go to the BIW Development Workbench.
I am not sure about the transaction settings on the R/3 side.
Could you please throw some light on this issue?
regards
blusky -
Hi all,
I am working with BI, and I am getting the following error while scheduling:
"Job terminated in source system --> Request set to red"
No other error messages are displayed. Does anyone know why I am getting this? Please post a reply.
thanks in advance
Naga
Hi,
The job has been terminated in the source system.
Go to the source system's SM37 and check the cancelled job list under the remote user's name.
When we run an InfoPackage in BW, it triggers a collection job in the source system, and it is that collection job that has been terminated.
Kindly check the job log and also the short dumps in ST22.
Kindly tell us the message in that short dump, or the reason for cancellation in the job log,
so that we can comment on it.
Hope this helps
Janardhan Kumar -
Instance failure, in System.Data in sql 2012
On Windows Server 2012 Standard I have a C# app that runs fine with SQL 2008 Express. When I installed SQL 2012 Express I get "Instance Failure", source: System.Data, when the following LINQ query executes:
// MessageBox.Show("Starting GetInitialPortStatus.");
db = new DataClasses1DataContext(); // establish data connection to CALMasterSQL

// 2. Query creation.
var PortsQuery = from portrecord in db.Channels
                 select portrecord;

// 3. Query execution.
int count = 0;
// MessageBox.Show("Starting >portrecord in PortsQuery<.");
foreach (var portrecord in PortsQuery)
{
    // add rows to the grid for each record in the database Ports table
}
Otherwise the app works fine. The closest fix in the forum I could find was to run

exec sp_configure 'user instances enabled', 1
go
reconfigure

which I did, but no change.
How can this be corrected? It seems like it is a known issue, but is there no fix for SQL 2012?
Thanks
Morris
Thank you Morris
Thanks for your reply.
The connection string is:
<add name="CMMainWindow.Properties.Settings.CALLMASTERConnectionString"
connectionString="Data Source=HLA2012\\SQL2012EXP;Initial Catalog=CALLMasterSQL;Integrated Security=True;Connect Timeout=5"
providerName="System.Data.SqlClient" />
The SQL instance is called "SQL2012EXP". There is also SQL 2008 Express on the same machine, called SQLEXPRESS.
Both engines work fine in every other way.
Thanks
Morris
Thank you Morris
Hi MoCoder,
According to your description, your connection string contains a double backslash between the server name and the instance name. In a .config file the value is not a C# string literal, so it should contain only a single backslash (for example HLA2012\SQL2012EXP, just as you would write localhost\SQLEXPRESS). There is a similar known issue about "Instance Failure" when connecting to SQL Server
Express from a C# application. You can review the following article.
http://www.hanselman.com/blog/FixingInstanceFailureWhenConnectingToSQLServer2005Express.aspx
In addition, make sure your SQL Server instance (MSSQLSERVER) is running via the Services management console.
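Assuming the instance name from the post, the corrected config entry would look like this (the only change from the posted string is the single backslash in Data Source):

```xml
<add name="CMMainWindow.Properties.Settings.CALLMASTERConnectionString"
     connectionString="Data Source=HLA2012\SQL2012EXP;Initial Catalog=CALLMasterSQL;Integrated Security=True;Connect Timeout=5"
     providerName="System.Data.SqlClient" />
```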
Thanks,
Sofiya Li
TechNet Community Support -
Field mismatched from Source system
Hi ,
In the BI data target, for field ZZAFLD000012, data is not coming from the CRM source system. When checking on the CRM side, the field does not contain any data, and it is mapped in the standard DataSource 0CRM_SRV_PROCESS_H of BI.
In the CRM source system, field ZZAFLD00003P contains the required data, but it is not mapped in DataSource 0CRM_SRV_PROCESS_H of BI.
How can I bring the data to BI? What authorizations are required for the CRM user?
Thanks in Advance,
Regards,
kalim
Hi,
I suggest you check a few places where you can see the status:
1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
Also see if there is any 'sysfail' for any datapacket in SM37.
2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
3) RSMO see what is available in details tab. It may be in update rules.
4) ST22 check if any short dump has occurred. (In source system if load is from R/3 or in BW if its a datamart load)
5) SM58 and BD87 for pending tRFCs.
Once you identify you can rectify the error.
If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
Thanks,
JituK -
HELP: 0 from 0 records: Initialization Option of source system
I executed an 'Initialization option for source system' on the scheduler and no data is coming from the source system.
Steps taken:
1. Delete data from target DSO
2. Picked the InfoPackage; on the scheduler it shows 'Initialization option for source system'
3. Deleted the initialization setting
4. Executed InfoPackage
Result: "0 from 0 records" found. How do I fill the queue table in the source system if no records are available?
5. I go to delete the request in the PSA to retry: "No data in the PSA".
This is to resolve a HIGH-status ticket, because I have stopped the delta process of other InfoProviders until this situation is resolved.
Can you provide detailed steps to properly trigger delta data from R/3 via "Initialization Option of source system"? We have cancelled delta updates until this is resolved.
Hi
Now that you have run the init once, there is no need to delete the init flag.
The delta pointer is set now; you can run the delta InfoPackage from now onwards.
But if there were already some previous deltas for this, and a delta had failed, because of which you have to initialise the delta again, then you need to delete the previous initialisation.
To delete the init flag.
Open your delta IP in RSA1 (or in the process chain).
Go to the Scheduler menu option,
"Initialization options for source system".
A window will pop up showing a successful load, with a tick mark in the first column.
If you see a cross mark, that means the initialization has failed.
This mark is called the init flag.
If you need to reinitialize the delta, then you have to delete this.
Select the entire row and click the third button at the bottom.
This will delete the initialization for the source system (i.e. the init flag).
Now, before running the delta, you have to run the init IP again.
It depends on the scenario whether you have to run it with data transfer or without data transfer.
Hope this clarifies the query
Regards
Shilpa -
SLT Replication for the same table from Multiple Source Systems
Hello,
With HANA 1.0 SP03, it is now possible to connect multiple source systems to the same SLT Replication Server, and from there to the same schema in SAP HANA. Does this mean the same table as well, or will it be different tables?
My doubt:
Consider that I am replicating KNA1 from two source systems, say SourceA and SourceB.
If I have different records in SourceA.KNA1 and SourceB.KNA1, I believe the records will be added during replication, and as a result the final table will hold two different records.
Now, if the same record appears in the KNA1 tables from both sources, the final table should hold only one record.
Also, if the same customer's record is posted in both systems with different values, it should add both records.
How does HANA handle this situation?
Please throw some light on this.
Hi Vishal,
I suggest you take a look at the SAP HANA SPS03 Master Guide. There is a comparison table for the three replication technologies available (see page 25).
For multi-system support, it gives these values:
- Trigger-Based Replication (SLT Replication): multiple source systems to multiple SAP HANA instances (one source system can be connected to one SAP HANA schema only)
So I think that in your case you should consider BO Data Services (losing the real-time analytics capabilities, of course).
Regards
Leopoldo Capasso -
Hi
Are there any limits to which SAP systems can be created as source systems in any particular BW instance?
My current BW instance has Source Systems set up for instances of ECC 6.0 and SRM 5.0, both used to manage our organisation's business.
Our area also runs a separate SAP ECC 6.0 client as a sort of bureau service for other organisations. It runs on a different platform.
Is there any reason why we couldn't create a Source System in our BW to connect to this separate SAP client ?
John
Hi,
We can create more than one ECC system as a source for one BW system. You can use different InfoObjects to identify the source systems, such as the ones below.
A few examples:
0LOG_SYS
0APO_LOGSYS
0CRM_LOGSYS
You can create a source system for another ECC and extract the data to BW; it is not a problem, because the logical system names are different.
Thanks
Reddy -
Data loading from source system takes a long time.
Hi,
I am loading data from R/3 to BW. I am getting following message in the monitor.
Request still running
Diagnosis
No errors could be found. The current process has probably not finished yet.
System response
The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
and/or
the maximum wait time for this request has not yet run out
and/or
the batch job in the source system has not yet ended.
Current status
in the source system
Is there anything wrong with the partner profile maintenance in the source system?
Cheers
Senthil
Hi,
I suggest you check a few places where you can see the status:
1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
Also see if there is any 'sysfail' for any datapacket in SM37.
2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
3) RSMO see what is available in details tab. It may be in update rules.
4) ST22 check if any short dump has occurred. (In source system if load is from R/3 or in BW if its a datamart load)
5) SM58 and BD87 for pending tRFCs and IDOCS.
Once you identify you can rectify the error.
If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
SM21 - System log can also be helpful.
Thanks,
JituK -
System slow during loading data from source system
hi,
I am trying to load master data from R/3 into BW in the quality environment by means of a process chain. The problem is that, being a master data load, it is consuming a lot of time. My development and quality environments are maintained on the same server. I suspect this is something related to memory. If anybody could mention ways memory can be monitored, along with the reason for the slowness of the system, it would be very helpful.
Source system: R/3.
Environment : Q03(quality)
Load : Master data (full load)
Hi,
I suggest you check a few places where you can see the status:
1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
Also see if there is any 'sysfail' for any datapacket in SM37.
2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
3) RSMO see what is available in details tab. It may be in update rules.
4) ST22 check if any short dump has occurred. (In source system if load is from R/3 or in BW if its a datamart load)
5) SM58 and BD87 for pending tRFCs.
Once you identify you can rectify the error.
If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
Thanks,
JituK -
Job Scheduler DS consumes all connections in Pool.
Hi all,
I use Weblogic 12.1.3.
My objective is to create a job scheduler, with commonj Timer API to run a in a cluster environment.
For this, I created a cluster with two node servers (I created this in the integrated weblogic server in Jdeveloper, for simplicity).
Then I created the DataSource pointing to the tables where the weblogic_timers and active tables are (needed for persistence of the timers), targeted this DS at the nodes in the cluster, and then went to Cluster -> Configuration -> Scheduling and selected the respective data source as "Data Source For Job Scheduler".
After I do this and the servers are up, all the connections in the DS pool are consumed. It seems like connections are continuously being made from WebLogic to the database.
The connection itself seems ok, since I can connect from SQLDeveloper and also tested it when I created the DataSource.
If I have a look in the logs of the two servers, I see errors like this:
<BEA-000627> <Reached maximum capacity of pool "JDBC Data Source-2", making "0" new resource instances instead of "1".>
Can you give me an idea of what the issue might be?
Please let me know if I should provide more information.
Thanks.
It's not an issue WebLogic can address. The Thortech application is independently using
the UCP connection pool product, not WebLogic connection pooling, so the Thortech APIs
would be the only way to debug/reconfigure the UCP.