BI Z-table as DataSource unable to load into PSA
Dear Experts
I have a Z-table DataSource. The Z-table resides in BI, and it contains text fields for all the attributes.
I have mapped this to the 0CUSTOMER attribute without mapping the text fields from the DataSource/Z-table.
I created an InfoPackage to full-load all data from the table.
The InfoPackage gets stuck at the "Extraction" stage; it shows "xxx number of records were sent".
At the "Processing" stage, the "Data Package" section shows the message "A data packet is either sent with transfer method IDoc or PSA to BI ...".
In the monitor wizard, the diagnosis reads: "The background job in the source system was not terminated. In this case it is probable that the tRFC to or from the source system is incorrect. To check this, the ALE inbox of the source system must be checked and compared with the BI ALE inbox."
Please advise what could be the cause of this failure to load data from the Z-table.
Thank you.
Regards
Bass
Dear Venkatesh,
I was unable to find any SM37 errors.
Can I confirm that, to load 0CUSTOMER in BI 7, the DataSource created can be either of 3.x type or of BI 7 type?
When creating the DataSource, I selected the non-3.x type; only two types can be selected from.
Also, I reran the InfoPackage (IP) with an ST01 trace on my user ID for authorization and an SQL trace.
Nothing was found.
The IP status is yellow: "Data not received in PSA Table".
Diagnosis: "Data has not been updated in PSA Table. The request is probably still running or there was a short dump."
It appears to be waiting for something forever; eventually it times out and the request turns red.
regards
Bass
Similar Messages
-
Load into PSA, then into target - in process chains
Hi
I have a question about first loading into the PSA and then into the target, using process chains.
It can be done by executing an InfoPackage that only loads into the PSA, followed by the process type "Read PSA and Update Data Target".
My problem is that I want to split this into two separate chains, i.e. load all data into the PSAs before loading into the targets. But when I put these two tasks in separate chains I get the syntax error shown below. Can it really be true that you cannot do this in separate chains?
Thanks
Karsten
Process LOADING variant ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 is referenced but is not in the chain
Message no. RSPC060
Diagnosis
Process LOADING, variant ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 must precede the current process. This process is not however available in the chain.
System response
The system is unable to activate the chain.
Procedure
Change the current process so that it no longer references LOADING ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7, or schedule process LOADING ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 in the chain so that it precedes the current process.

Hello Karsten
I've discussed this with SAP, and even though the response was not clear, it does not seem to be possible. We also wanted to do this because we have a source system running 24/7. Since master data and transactional data can be created at any time, it was the only way to make sure all master data were available for the load of transactional data (loading transactional data into the PSA, then loading master data, then PSA to target). Is it for the same reason that you want to separate the loads?
What I eventually did is load into an ODS that is not BEx-relevant (so no SIDs are required), with no master data check in the IP, so the ODS behaves like the PSA. Then load master data and activate, then load from this ODS to the target. It works fine, and I am not considering changing it.
Good luck
Philippe -
Microsoft Visual Basic 2010 Express.
I am new to Visual Basic programming and I am trying to understand the relationships between datasets, databases, and table adapters. The following code gives me this error: "Unable to load, Update requires a valid DeleteCommand when passed DataRow collection with deleted rows".
I can track the error to the "OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)" line. What am I missing?
It seems that I can delete the data in the DataGridView, and the grid then displays only the correct data, but my database is not updating even though the grid displays differently. I can tell because when I save the offset database, all the previous uploads are still there, including the rows I wanted to delete.
My final goal is to import offset data from a CSV file, save this data on the PC, and send a copy of it to NumericUpDown controls so the customer can modify certain numbers. From there they download all the data to a controller. If the customer needs to modify the imported data, they can go to a tab with a DataGridView and modify the table. They will also have the option to save the modified data to a CSV file.
I'm not sure if I am overcomplicating this or if there is an easier way to program it.
CODE:
Private Function LoadOffSetData() As Boolean
    Dim LoadOffsetDialog As New OpenFileDialog 'create a new open-file dialog and set up its parameters
    LoadOffsetDialog.DefaultExt = "csv"
    LoadOffsetDialog.Filter = "csv|*.csv"
    LoadOffsetDialog.Title = "Load Offset Data"
    LoadOffsetDialog.FileName = "RollCoaterOffset.csv"
    If LoadOffsetDialog.ShowDialog() = Windows.Forms.DialogResult.OK Then 'show the dialog and continue if the result is OK
        Try
            Dim myStream As New System.IO.StreamReader(LoadOffsetDialog.OpenFile) 'try to open the file with a stream reader
            If (myStream IsNot Nothing) Then 'if the file is valid
                'delete all of the existing rows
                For Each oldRow As MaterionOffsetDataSet.OffsetTableRow In MaterionOffsetDataSet.OffsetTable.Rows
                    oldRow.Delete()
                Next
                'OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)
                Dim rowvalue As String
                Dim cellvalue(25) As String
                'read the CSV file content line by line
                While myStream.Peek() <> -1
                    Dim NRow As MaterionOffsetDataSet.OffsetTableRow
                    rowvalue = myStream.ReadLine()
                    cellvalue = rowvalue.Split(","c) 'adjust if your separator is not a comma
                    NRow = MaterionOffsetDataSet.OffsetTable.Rows.Add(cellvalue)
                    Me.OffsetTableTableAdapter.Update(NRow)
                End While
                Me.OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)
                'copy the table offsets to the offset NumericUpDown controls in the main window
                MainOffset.Value = OffsetTableTableAdapter.MainOffsetValue
                StationOffset01.Value = OffsetTableTableAdapter.Station01Value
                StationOffset02.Value = OffsetTableTableAdapter.Station02Value
                myStream.Close() 'close the stream
                Return True
            Else 'if we were not able to open the file
                MsgBox("Unable to load, check file name and location") 'tell the operator the file could not be opened
                Return False
            End If
        Catch ex As Exception
            MsgBox("Unable to load, " + ex.Message)
            Return False
        End Try
    Else
        Return False
    End If
End Function

Hello SaulMTZ,
>>I can track the error and its located in "OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)" code. What am i missing?
This error usually indicates that you have not initialized the DeleteCommand object; you could check this article to see if it gives you a workaround.
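For reference, a minimal sketch of supplying the missing DeleteCommand via a command builder (this is a generic illustration, not the poster's typed-dataset setup; the connection string and table name are placeholders, and the SELECT must include the primary key for the builder to work):

```vb
' Hedged sketch: let OleDbCommandBuilder generate the DELETE (and other)
' commands for a plain data adapter. "MyConnectionString" and "OffsetTable"
' are placeholders.
Imports System.Data
Imports System.Data.OleDb

Module DeleteCommandSketch
    Sub UpdateWithDelete(ByVal table As DataTable)
        Using conn As New OleDbConnection("MyConnectionString")
            Dim adapter As New OleDbDataAdapter("SELECT * FROM OffsetTable", conn)
            ' The builder derives INSERT/UPDATE/DELETE from the SELECT statement.
            Dim builder As New OleDbCommandBuilder(adapter)
            adapter.DeleteCommand = builder.GetDeleteCommand()
            adapter.Update(table) ' deleted rows are now pushed to the database
        End Using
    End Sub
End Module
```

With a designer-generated TableAdapter, the equivalent fix is usually to reconfigure the adapter so the wizard generates the DELETE statement (it cannot when the table lacks a primary key).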
>> I'm not sure if I am overcomplicating this or if there is an easier way to program it.
If you are working with a CSV file, you could use OleDb to read it, which treats the CSV file as a table:
http://www.codeproject.com/Articles/27802/Using-OleDb-to-Import-Text-Files-tab-CSV-custom
That seems easier, in my opinion.
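As a rough illustration of the OleDb approach from that article (the folder and file names here are assumptions; the Jet text driver shown is 32-bit only, and 64-bit processes need the ACE provider instead):

```vb
' Hedged sketch: read a CSV file as a table via OleDb.
' "C:\Data" and "RollCoaterOffset.csv" are placeholder paths.
Imports System.Data
Imports System.Data.OleDb

Module CsvReadSketch
    Function LoadCsv() As DataTable
        Dim connStr As String =
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data;" &
            "Extended Properties=""text;HDR=Yes;FMT=Delimited"""
        Dim table As New DataTable()
        Using conn As New OleDbConnection(connStr)
            ' With the text driver, the file name acts as the table name.
            Dim adapter As New OleDbDataAdapter(
                "SELECT * FROM [RollCoaterOffset.csv]", conn)
            adapter.Fill(table)
        End Using
        Return table
    End Function
End Module
```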
Regards.
-
Qosmio F - Unable to load into Windows or anything
Yesterday, while I was studying outside, another student bumped the table I was studying on. I would say "tapped", as it was not that hard. The normal message appeared saying the laptop would move the hard drive to a safe position, etc.
Shortly after, maybe 10-20 minutes, I got a message on the laptop screen asking me to back up the laptop because of some major error or failure. I clicked on "additional information" and it basically said that all my drives (C:, G:, and whatever else I had) had disk failures. Unable to back up the laptop because I did not have CDs on hand, I took it home to do it today.
The laptop now refuses to even load into Windows, so I am unable to back up anything. It keeps loading into options for system repair, restore, memory diagnostics, etc., and nothing fixes it.
I only purchased the laptop in Oct/Nov 2010 and didn't start using it until Jan/Feb of this year. Should I open it up in case the HDD has come loose, or should I send it back to the manufacturer? It's critical exam time for me and this is devastating! I'm also flying overseas in late June after exams and want to take the laptop with me. Please help! Thanks in advance for all advice and suggestions.

Hi Kyo,
Your posting is a little hard to understand, and a screenshot of the error message would be useful, but if you get a message about a defective HDD affecting all partitions, you should contact an authorized service provider in your country.
Your notebook should still be under warranty, so they can replace the HDD for free. If you need some important data, you can ask whether they can rescue some files and make a backup. Alternatively, you can try it yourself: buy an external HDD case, for example, and connect the HDD via USB to another computer. -
" Error when inserting in PSA table RSTSODSPART" when loading into PSA
Hello,
Did anyone encounter something similar, and found a solution ?
BI 7.0 - EnhP 1 - SP5
When loading data from R/3 into the PSA, we encounter the following error message:
" Error when inserting in PSA table RSTSODSPART"
===============================================================
- The first data package is written successfully to the PSA, but from the second data package onwards the error message pops up:
" Error when inserting in PSA table RSTSODSPART"
- The original data selection of the InfoPackage is stored somewhere: if the selection in the InfoPackage is reduced (e.g. only one document instead of a whole period), it still starts loading according to the original selection (the month).
- If we create a new InfoPackage and select very little data (only one data package), then we can load to the PSA successfully, but when we load to the DSO we cannot activate the DSO.
- We noticed that two versions of the PSA exist for DataSource 0FI_GL_40:
a table (/BIC/B0000555000) in version 1,
and a table (/BIC/B0000555001) in version 2.
Thanks in advance,
Best regards

Hello All,
We had a problem similar to what Ilse Depoortere describes. The problem happens in our BW QA system a few days/weeks after it has been copied from Production. Data loads start failing with these two error messages:
RSM2 851 - Error when writing in PSA (Caller 32)
RSAR 130 - Error 7 when adding to PSA (Caller 70)
We have just implemented the fix described in note 1340371 (replaced the LIB_DBSL with patch level 247), and the problem is gone. Failed data load process chains can be restarted and finish successfully.
This has been a really useful discussion, thanks all! And thanks to my coworker Bob who found it.
Vince Vajda -
Using RSCRM_BAPI to convert data from Query and load into PSA to DSO
Hi Experts,
I have a complex query which is being executed in BeX.
I need to go in the reverse direction and extract the data from the query, load it into the PSA, and then into a DSO.
I am using RSCRM_BAPI for this purpose and have successfully got the data into the extract table.
But my query has a variable screen where we need to give some inputs for selection.
I have created those InfoObjects and put them in the DSO.
How do I capture these selection-screen inputs for storing in fields of my DSO, given that the extract table does not store them?

Write the whole INSERT query you need with the help of Excel formulas; they are very easy, or Google "how to use Excel formulas".
Then copy the formula down the whole set of data in the Excel fields.
Run the INSERT statements from Excel as a batch.
I hope this helps. -
Unable to load into XMLTYPE column: enumeration value considered as boolean
Hi Gentlemen,
I have a problem with my XMLTYPE load procedure. The schemas have been registered:
SQL> select any_path from resource_view where any_path like '%GKSADMIN/ICD%';
ANY_PATH
/sys/schemas/GKSADMIN/ICD
/sys/schemas/GKSADMIN/ICD/datentypen_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/ehd_header_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/ehd_root_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/icd_body.xsd
/sys/schemas/GKSADMIN/ICD/icd_header.xsd
/sys/schemas/GKSADMIN/ICD/icd_root.xsd
/sys/schemas/GKSADMIN/ICD/keytabs_V1.40.xsd
8 rows selected.

The table has been created:
SQL> describe icd
Name           Null?    Type
ID                      CHAR(2)
XML_DOCUMENT            SYS.XMLTYPE(XMLSchema "ICD/icd_root.xsd" Element "ehd") STORAGE Object-relational TYPE "ICD$ICD_ROOT_TYP"

And I try to insert an instance document:
SQL> @loadxmlfileascolumn_int
Enter a value for source_directory: c:\gks\kbv\c\icd\xml
old 1: create or replace directory SOURCE_DIR as '&source_directory'
new 1: create or replace directory SOURCE_DIR as 'c:\gks\kbv\c\icd\xml'
Directory created.
Enter a value for xmltypetable: icd
old 4: INSERT INTO &XMLTypeTable
new 4: INSERT INTO icd
Enter a value for id: 01
Enter a value for instancedocument: icdtest.xml
old 5: VALUES (&id, XMLType(bfilename('SOURCE_DIR', '&InstanceDocument'),
new 5: VALUES (01, XMLType(bfilename('SOURCE_DIR', 'icdtest.xml'),
declare
ERROR at line 1:
ORA-31038: Invalid boolean value: "j"
ORA-06512: at line 4
SQL> spool off

As the schemas are very big, I copied only the relevant parts to show how the values are defined (in icd_body.xsd):
<xs:element name="abrechenbar">
<xs:annotation>
<xs:documentation>Kennzeichnung, ob (abrechenbarer) ICD-10-GM-Code</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:attribute name="V" use="required">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="j"/>
<xs:enumeration value="n"/>
</xs:restriction>
</xs:simpleType>
</xs:attribute>
</xs:complexType>
</xs:element>
SQL> spool off
"j" and "n" are for Yes and No in German. But anyway, how could it become a Boolean? Why is it invalid?
Can you please help me so that I can load XMLTYPE contents?
Thanks, regards
Miklos HERBOLY
Sorry, I found one place where a boolean is expected. I modified the instance document so that it has true or false. And still something is recognized as a boolean with the value "j".
Edited by: mh**** on Jul 4, 2011 4:43 AM
Edited by: mh**** on Jul 4, 2011 5:07 AM

I don't get it, it should work, right? Although I am not sure whether an XMLType object-relational column is supported; then again, this is not clear in the documentation.
The only thing I can think of is that the attribute "V" is enforced as a boolean in the database at a lower object-relational level or... -
Unable to load into XMLTYPE column
Hi Gentlemen,
I have considerable difficulties with loading XML content into an XMLTYPE column. Here are the annotated schemas:
SQL> select any_path from resource_view where any_path like '%GKSADMIN/ICD%';
ANY_PATH
/sys/schemas/GKSADMIN/ICD
/sys/schemas/GKSADMIN/ICD/datentypen_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/ehd_header_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/ehd_root_V1.40.xsd
/sys/schemas/GKSADMIN/ICD/icd_body.xsd
/sys/schemas/GKSADMIN/ICD/icd_header.xsd
/sys/schemas/GKSADMIN/ICD/icd_root.xsd
/sys/schemas/GKSADMIN/ICD/keytabs_V1.40.xsd
8 rows selected.

And here is the relational table:
SQL> describe icd
Name           Null?    Type
ID                      CHAR(2)
XML_DOCUMENT            SYS.XMLTYPE(XMLSchema "ICD/icd_root.xsd" Element "ehd") STORAGE Object-relational TYPE "ICD$ICD_ROOT_TYP"

Now, when I try to load an instance document (which has proven OK with xmloracle), the following happens:
SQL> @loadxmlfileascolumn_int
Enter a value for source_directory: c:\gks\kbv\c\icd\xml
old 1: create or replace directory SOURCE_DIR as '&source_directory'
new 1: create or replace directory SOURCE_DIR as 'c:\gks\kbv\c\icd\xml'
Directory created.
Enter a value for xmltypetable: icd
old 4: INSERT INTO &XMLTypeTable
new 4: INSERT INTO icd
Enter a value for id: 01
Enter a value for instancedocument: icdtest.xml
old 5: VALUES (&id, XMLType(bfilename('SOURCE_DIR', '&InstanceDocument'),
new 5: VALUES (01, XMLType(bfilename('SOURCE_DIR', 'icdtest.xml'),
declare
ERROR at line 1:
ORA-01830: date format picture ends before converting entire input string
ORA-06512: at line 4
SQL> spool off

Since the schemas are very big, I took only the relevant parts to show:
<ehd:document_type_cd V="ICD" DN="ICD-Stammdatei" S="1.2.276.0.76.5.100"/>
<ehd:service_tmr V="2010-01-01..2010-12-31"/>
<ehd:origination_dttm V="2009-11-10+01:00"/>

The culprit is allegedly origination_dttm with its time zone. However, consider the type resolution below, which leads to xs:date; this should be able to accommodate the time zone as well.
<!-- ************************ origination_dttm_typ ********************************* -->
<xs:element name="origination_dttm" type="origination_dttm_typ">
<xs:annotation>
<xs:documentation>Erstellungsdatum</xs:documentation>
</xs:annotation>
</xs:element>
<xs:complexType name="origination_dttm_typ">
<xs:complexContent>
<xs:extension base="v_date_typ"/>
</xs:complexContent>
</xs:complexType>
v_date_typ: contains only the V attribute for simple date values
<!-- ************************ v_date_typ ********************************** -->
<xs:complexType name="v_date_typ">
<xs:attribute name="V" type="xs:date" use="required"/>
</xs:complexType>

When I delete the time zone portion from the instance document, it works again.
(However, other errors will then be reported--another issue.)
Can you help me to find out what is wrong? Is it my error or a bug?
Thanks, kind regards,
Miklos HERBOLY
Edited by: mh**** on Jul 3, 2011 10:55 AM

This is an Oracle feature (at least in 11g R2). See "Working with Timezones" in the XML DB Developer's Guide: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16659/xdb05sto.htm#autoId62
The chapter also includes an example of how to adjust the schema with an Oracle-specific annotation.
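For reference, a minimal sketch of what such an annotation could look like, based on the xdb:SQLType mechanism described in that chapter (the choice of TIMESTAMP WITH TIME ZONE here is an assumption; check the guide for the exact annotation your Oracle release supports):

```xml
<!-- Hedged sketch: declare the xdb namespace on the xs:schema element
     and force the SQL type so a value like 2009-11-10+01:00 can be
     stored with its time zone. The type name is an assumption. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb">
  <xs:complexType name="v_date_typ">
    <xs:attribute name="V" type="xs:date" use="required"
                  xdb:SQLType="TIMESTAMP WITH TIME ZONE"/>
  </xs:complexType>
</xs:schema>
```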
See also http://stackoverflow.com/questions/6370035/why-dbms-xmlschema-fails-to-validate-a-valid-xsdatetime/6382096#6382096
Edited by: 836290 on Jul 4, 2011 7:56 AM -
Unable to load into XMLTYPE column--ORA-21700
Hi Gentlemen,
I have a nice schema-based table and want to insert into it. However, I always get ORA-21700. Which object is marked for delete? I did not mark anything.
SQL> @createIcdTable
Table dropped.
Table created.
SQL> describe icd
Name           Null?    Type
ID                      CHAR(2)
XML_DOCUMENT            SYS.XMLTYPE(XMLSchema "ICD/icd_root.xsd" Element "ehd") STORAGE Object-relational TYPE "ICD$ICD_ROOT_TYP"

And here is the load script:
SQL> @loadxmlfileascolumn_int
Enter a value for source_directory: c:\gks\kbv\c\icd\xml
old 1: create or replace directory SOURCE_DIR as '&source_directory'
new 1: create or replace directory SOURCE_DIR as 'c:\gks\kbv\c\icd\xml'
Directory created.
Enter a value for xmltypetable: icd
old 4: INSERT INTO &XMLTypeTable
new 4: INSERT INTO icd
Enter a value for id: 01
Enter a value for instancedocument: test.xml
old 5: VALUES (&id, XMLType(bfilename('SOURCE_DIR', '&InstanceDocument'),
new 5: VALUES (01, XMLType(bfilename('SOURCE_DIR', 'test.xml'),
declare
ERROR at line 1:
ORA-21700: object does not exist or is marked for delete
ORA-06512: at line 4
SQL> spool off

Could anybody help me circumvent this? Or is it a bug again?
Thanks, regards
Miklos HERBOLY
Edited by: mh**** on Jul 4, 2011 10:31 AM

Hi mdrake,
Thank you for your reply. I executed the schema registration with gentables=TRUE: the same error occurred. Here are the schema dependencies; I cannot find any recursion in them. Note that the schema file names end in .ann (for annotated), but the registered schema URLs are .xsd names.
ehd_header_V1.40.ann
<xs:include schemaLocation="ICD/datentypen_V1.40.xsd"/>
ehd_root_V1.40.ann
<xs:include schemaLocation="ICD/ehd_header_V1.40.xsd"/>
<xs:include schemaLocation="ICD/keytabs_V1.40.xsd"/>
<xs:import namespace="urn:ehd/icd/001" schemaLocation="ICD/icd_body.xsd"/>
icd_header.ann
<xs:include schemaLocation="ICD/ehd_header_V1.40.xsd"/>
icd_root.ann
<xs:include schemaLocation="ICD/ehd_root_V1.40.xsd"/>
<xs:include schemaLocation="ICD/keytabs_V1.40.xsd"/>
<xs:include schemaLocation="ICD/icd_header.xsd"/>
<xs:import namespace="urn:ehd/icd/001" schemaLocation="ICD/icd_body.xsd"/>
keytabs_V1.40.ann
datentypen_V1.40.ann
icd_body.ann
Then I executed your query (NULL values are shown as "?"), with the following results (I am sorry about the bulky output here):
SQL> @m
URL NAME TBL
ICD/datentypen_V1.40.xsd document_relationship.type_cd ?
ICD/datentypen_V1.40.xsd related_document ?
ICD/datentypen_V1.40.xsd id ?
ICD/datentypen_V1.40.xsd set_id ?
ICD/datentypen_V1.40.xsd version_nbr ?
ICD/datentypen_V1.40.xsd intended_recipient.type_cd ?
ICD/datentypen_V1.40.xsd function_cd ?
ICD/datentypen_V1.40.xsd originator.type_cd ?
ICD/datentypen_V1.40.xsd function_cd ?
ICD/datentypen_V1.40.xsd participation_tmr ?
ICD/datentypen_V1.40.xsd provider.type_cd ?
ICD/datentypen_V1.40.xsd function_cd ?
ICD/datentypen_V1.40.xsd participation_tmr ?
ICD/datentypen_V1.40.xsd group.type_cd ?
ICD/datentypen_V1.40.xsd id ?
ICD/datentypen_V1.40.xsd person_name ?
ICD/datentypen_V1.40.xsd nm ?
ICD/datentypen_V1.40.xsd GIV ?
ICD/datentypen_V1.40.xsd MID ?
ICD/datentypen_V1.40.xsd FAM ?
ICD/datentypen_V1.40.xsd PFX ?
ICD/datentypen_V1.40.xsd SFX ?
ICD/datentypen_V1.40.xsd DEL ?
ICD/datentypen_V1.40.xsd id ?
ICD/datentypen_V1.40.xsd STR ?
ICD/datentypen_V1.40.xsd HNR ?
ICD/datentypen_V1.40.xsd POB ?
ICD/datentypen_V1.40.xsd ZIP ?
ICD/datentypen_V1.40.xsd CTY ?
ICD/datentypen_V1.40.xsd STA ?
ICD/datentypen_V1.40.xsd CNT ?
ICD/datentypen_V1.40.xsd ADL ?
ICD/datentypen_V1.40.xsd patient.type_cd ?
ICD/datentypen_V1.40.xsd person ?
ICD/datentypen_V1.40.xsd birth_dttm ?
ICD/datentypen_V1.40.xsd administrative_gender_cd ?
ICD/datentypen_V1.40.xsd local_header ?
ICD/datentypen_V1.40.xsd id ?
ICD/datentypen_V1.40.xsd interface.nm ?
ICD/datentypen_V1.40.xsd version ?
ICD/datentypen_V1.40.xsd description ?
ICD/datentypen_V1.40.xsd scope.type_cd ?
ICD/icd_body.xsd kapitel_liste ?
ICD/icd_body.xsd diagnosethesaurus_liste ?
ICD/icd_body.xsd kapitel ?
ICD/icd_body.xsd nummer ?
ICD/icd_body.xsd von_icd_code ?
ICD/icd_body.xsd bis_icd_code ?
ICD/icd_body.xsd bezeichnung ?
ICD/icd_body.xsd gruppen_liste ?
ICD/icd_body.xsd gruppe ?
ICD/icd_body.xsd von_icd_code ?
ICD/icd_body.xsd bis_icd_code ?
ICD/icd_body.xsd bezeichnung ?
ICD/icd_body.xsd diagnosen_liste ?
ICD/icd_body.xsd diagnose ?
ICD/icd_body.xsd icd_code ?
ICD/icd_body.xsd bezeichnung ?
ICD/icd_body.xsd abrechenbar ?
ICD/icd_body.xsd diagnosen_liste ?
ICD/icd_body.xsd diagnosethesaurus_liste ?
ICD/icd_body.xsd kodierrichtlinien_liste ?
ICD/icd_body.xsd notationskennzeichen ?
ICD/icd_body.xsd geschlechtsbezug ?
ICD/icd_body.xsd geschlechtsbezug_fehlerart ?
ICD/icd_body.xsd untere_altersgrenze ?
ICD/icd_body.xsd obere_altersgrenze ?
ICD/icd_body.xsd altersbezug_fehlerart ?
ICD/icd_body.xsd krankheit_in_mitteleuropa_sehr_s ?
ICD/icd_body.xsd schlüsselnummer_mit_inhalt_beleg ?
ICD/icd_body.xsd infektionsschutzgesetz_meldepfli ?
ICD/icd_body.xsd infektionsschutzgesetz_abrechnun ?
ICD/icd_body.xsd diagnosethesaurus ?
ICD/icd_body.xsd hausarztkodierung ?
ICD/icd_body.xsd akr_ref ?
ICD/ehd_header_V1.40.xsd id ?
ICD/ehd_header_V1.40.xsd set_id ?
ICD/ehd_header_V1.40.xsd version_nbr ?
ICD/ehd_header_V1.40.xsd document_type_cd ?
ICD/ehd_header_V1.40.xsd service_tmr ?
ICD/ehd_header_V1.40.xsd origination_dttm ?
ICD/ehd_header_V1.40.xsd state ?
ICD/ehd_header_V1.40.xsd interface ?
ICD/ehd_root_V1.40.xsd header ?
ICD/ehd_root_V1.40.xsd body ?
ICD/ehd_root_V1.40.xsd keytabs ?
ICD/ehd_root_V1.40.xsd icd:icd_stammdaten ?
ICD/icd_header.xsd id ?
ICD/icd_header.xsd document_type_cd ?
ICD/icd_header.xsd service_tmr ?
ICD/icd_header.xsd origination_dttm ?
ICD/icd_header.xsd interface ?
ICD/icd_header.xsd id ?
ICD/icd_header.xsd interface.nm ?
ICD/icd_header.xsd version ?
ICD/icd_header.xsd description ?
ICD/icd_root.xsd header ?
ICD/icd_root.xsd body ?
ICD/icd_root.xsd keytabs ?
ICD/icd_root.xsd icd:icd_stammdaten ?
100 rows selected.
SQL> spool off

What else could it be?
Well, the installation did not complain at all, but when I inspect trace.log there are lines such as:
PRCT-1400 : Execution of getcrshome was not successful. Detailed error: localnode
[main] [ 2011-07-01 22:38:00.836 CEST ] [HAUtils.isHASConfigured:412] Ignoring exception in isHASConfigured: PRCI-1112 : Directory name passed was null
PRCI-1112 : Directory name passed was null
...
Could this interfere with my problem?
Edited by: mh**** on Jul 9, 2011 1:56 AM -
When I schedule the InfoPackage, the request is successful, saying 200 records were received in the PSA, but there is no data in the PSA.

Hi Ambika,
This could happen if the PSA table is not active; check it, and if it is not, replicate the DataSource from the source system.
Can you provide the detailed message about scheduling the InfoPackage? There could be a loading issue in the extractor settings, or the selection parameters provided may be invalid.
Regards
Satish -
Data cannot be loaded into PSA via process chain
dear guru,
The process chain is very simple: DataSource (master data, full) ---> PSA.
Schedule: start at 10:00 AM, hourly.
When I run SM50 I can see the job is released, but when I check the PSA, nothing is found.
Why?
Thanks.
Best regards, Steve

Hi,
If you want, you can run only the load manually via the display variant and schedule it.
But to test the chain, as it is a full load, can you run the process chain again? It may work fine.
Maybe the old job was not scheduled correctly and is still in a released state; once the job becomes active it will be fine.
But still, try to run the chain once again and revert.
Regards,
Edited by: Krishna Rao on Apr 15, 2009 12:40 PM -
Replicate a table as datasource on BW
Hi,
I am planning to create tables in BW and then somehow replicate the tables as DataSources to begin loading. Is this possible? If so, how can I do it?
Thanks,
RT

Hi AHP,
Actually, I have these flat files that are supposed to be loaded into BW for master data InfoObjects. Per Kamal, I can load them as flat files using InfoPackages, but I am planning to do the following to ease the pain of formatting the files by hand:
1. Upload the data files to BW tables using ABAP.
2. Run RSO2 from BW to create data sources.
3. Replicate in BW.
4. Create infosources.
5. Create infopackages.
6. Load to the InfoObjects.
Do you think this would work? When I replicate, should I select the BW source system?
Thanks
RT
Message was edited by: Rob Thomas -
Data loads into multiple InfoObjects from 0EHS_PHRASE_TEXT DataSource.
Dear Experts,
I am working on SAP HCM-BW 7.0 Implementation and am trying to load data into 8 different InfoObjects (Texts) through 0EHS_PHRASE_TEXT (Phrases) extractor.
There are many InfoPackages in place created during the previous project loading into different set of Standard BCT InfoObjects.
After migrating the 0EHS_PHRASE_TEXT DataSource from 3.x to 7.0 version, I have created different InfoObjects with Transformations and DTPs upon business requirements to load them. The relevant data is available in 0EHS_PHRASE_TEXT when checked in Extractor Checker (RSA3). However, when I create InfoPackages to load these different InfoObjects, I don't see the Data Targets listed and it makes sense since the data is first loaded into PSA and then will be loaded into Data Targets (in this case InfoObjects with Text) using DTPs.
My concern is... when I create a Process Chain to load Master Data into these Objects, the following action is happening:
1. Load 10 Records into PSA and then into InfoObject A. All 10 Records are loaded into A.
2. Load 5 Records into PSA and then into InfoObject B. Previous 10 + New 5 Records are loaded into B.
3. and it continues... for 8 InfoObjects.
My question is... is there a way to make the corresponding Data Targets tab visible in the InfoPackages, so the data is immediately picked from the PSA of the 0EHS_PHRASE_TEXT DataSource and then loaded into the corresponding InfoObjects?
Or should I make use of transfer rules by restoring the 0EHS_PHRASE_TEXT DataSource from the 7.0 to the 3.x version?
Your help is much appreciated.
Thanks,
Chandu

Hi Andreas,
Thanks for replying and sorry for the confusion.
The extractor is delivering 10 and then 5 because of different selection parameters in the InfoPackage "selecting" which InfoObject I wish to load the texts for.
Your understanding of my requirements is correct.
Could you please elaborate on Option 1 a bit more?
Regarding your Option 2, I don't want the DTPs to extract directly in full from the datasource because by doing so, all the data apart from the relevant data is also loaded into each and every InfoObject. And I don't want that to happen. I want to load only relevant data into respective InfoObjects from the PSA using DTPs.
I have tried to setup filters in DTPs with different Selection Parameters and tried to load relevant InfoObjects. For example, I am trying to load ZEHS_SUBS InfoObject with EHS_INJ_SUB_SUBSTANCE as the Selection Parameter. There are 54 records in both Source System and PSA for this parameter.
And the request is showing 54 in Transferred Records but only the very last 1 in Added Records. When I check in the Target (ZEHS_SUBS) InfoObject, it is showing only 1 Record. I have tried many combinations. They are:
1. Loaded only EHS_INJ_SUB_SUBSTANCE data from Source System into PSA and then tried to execute the relevant DTP.
2. Checked the 'Do Not Extract from PSA but Access Data Source (for Small Amounts of Data) and tried it.
3. Tried with both Delta and Full Extraction Mode.
4. Set Filter on Single Value as 'EHS_INJ_SUB_SUBSTANCE' to extract this data only.
5. Set Filter Excluding all the other Single Values.
6. Checked the 'Handle Duplicate Record Keys'.
Still, only the last record out of the 54 shows up in the target InfoObject.
Please let me know if you have any idea as to why this is happening so.
Thanks for your time.
Chandu -
Difference between read from table or load into table in Mappings
Hi everybody.
I wanted to deploy a mapping. The following error was shown:
ora-04021 timeout occurred while waiting to lock object Mapp_xyz
I tried to deploy another mapping, which was successful. I found out that a specific table was locked, and this table is used in both mappings. In the mapping that was deployed successfully, I load into this table; in the other mapping, I read from it. So my question is: is there any difference between reading from a table in a mapping and loading into a table?
I ask because after the lock was released, the mapping was deployed successfully.
Any information would be appreciated.
Greetings

Hi,
Thanks for the reply. A process flow was aborted while it was executing the mapping, so on our DB we had an active session that was causing the lock.
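For anyone hitting a similar situation, a query along these lines (standard Oracle dictionary and dynamic performance views; 'MAPP_XYZ' is a placeholder for the locked object) can show which session holds a DML lock on the table. This is a hedged sketch: for an ORA-04021 on the mapping package itself, the library-cache view DBA_DDL_LOCKS may be the more relevant place to look.

```sql
-- Hedged sketch: find sessions holding DML locks on a given object.
SELECT s.sid,
       s.serial#,
       s.username,
       o.object_name,
       lo.locked_mode
FROM   v$locked_object lo
       JOIN dba_objects o ON o.object_id = lo.object_id
       JOIN v$session   s ON s.sid       = lo.session_id
WHERE  o.object_name = 'MAPP_XYZ';
```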
Thanks.
Greetings -
Error in loading data into PSA from source system.
Hi Experts !!!
Good morning .
I am trying to load data from an SRM source system into a BI system. When I execute the InfoPackage, the data is not loaded into the PSA and the status of the process immediately becomes yellow. After some time it gets timed out and turns RED. When I checked the error documentation, I found that the IDoc response was not received from the source system. The detailed error is below.
System Response
There are IDocs in the source system's ALE outbox that did not arrive in the ALE inbox of BI.
Further Analysis
Check the tRFC log.
You can access this log using the wizard or the menu path "Environment > Transact. RFC > In the source system".
However, I was not able to navigate to this path. I think that if I can navigate to it, I can manually push the IDocs from the source system to the BI system.
Regards,
Mandar.Hi,
Check the DataSource in RSA3. If it works fine and you can see the data in RSA3, there is no problem at the DataSource level; then check the mappings and any routines in BW for that DataSource. If those are also fine, check the options below.
Check for dumps in ST22 and SM21.
Check the RFC connection between the ECC and BW systems: RSA1 --> Source Systems --> right-click on the source system and choose Check.
The BWREMOTE or ALEREMOTE user must have the following profiles, so add them; one of these two users is used in the background to extract the data from ECC:
S_BI-WHM_RFC, S_BI-WHM_SPC, S_BI-WX_RFC
Also check the following:
1. Connections from BW to ECC and from ECC to BW in SM59.
2. Port, partner profiles, and message types in WE20 in ECC and BW.
3. Dumps in ST22 and SM21.
4. If IDocs are stuck: take the OLTP IDoc numbers from the details tab of the RSMO screen in BW (shown at the bottom), then go to ECC and check their status in WE05 or WE02. If there is an error, check the log; otherwise go to BD87 in ECC, enter the IDoc numbers, execute them manually, and then refresh RSMO.
5. Check for LUWs stuck in SM58 with User Name = *, run it, select your stuck LUW, execute it manually, and check RSMO in BW.
See on SDN:
Re: Loading error in the production system
IDoc getting stuck at tRFC (SM58)
1. Reorganize table ARFCSSTATE.
2. Increase the resources in the system (number of work processes and memory), as this issue can be caused by a shortage of them.
The source system tries to send a tRFC to the target; if no work processes are available, it goes into the "Transaction Recorded" state, and from there it will not retry sending, so you have to execute it manually.
You can also increase the timeout parameter so that it retries a few more times before it actually goes into the recorded state.
Regards,
Suman