DTP picking 0 records
Hi Experts,
I loaded data up to the PSA using an InfoPackage and got around 7,000 records. But when I executed the DTP, these records were not picked up, resulting in 0 records in the DSO. There are no filters at the DTP level.
Please advise. This is in a production system.
Hi,
What is the update mode from PSA to DSO at the DTP level?
Under the Execution tab, the processing mode should be: Serial Extraction and Processing of Source Package...
rgds,
Similar Messages
-
Picking records with distinct col values
Hi gurus,
I have a table with
id col1 col2 col3
1 a b c
2 e f g
3 a b c
4 a b c
What would be the query to pick records with distinct col1, col2, col3 values, i.e. only the records with values (1, a, b, c) and (2, e, f, g) in the above table?
Could you please help me on this?
Best Regards
Sridha

SQL> with t as
2 (
3 select 1 COL1, 'A' COL2, 'B' COL3, 'C' COL4 FROM DUAL UNION ALL
4 select 2, 'D', 'E', 'F' FROM DUAL UNION ALL
5 select 3, 'A', 'B', 'C' FROM DUAL UNION ALL
6 select 4, 'G', 'H', 'I' FROM DUAL UNION ALL
7 select 5, 'D', 'E', 'F' FROM DUAL
8 )
9 SELECT COL1, COL2, COL3, COL4
10 FROM T
11 /
COL1 C C C
1 A B C
2 D E F
3 A B C
4 G H I
5 D E F
SQL> with t as
2 (
3 select 1 COL1, 'A' COL2, 'B' COL3, 'C' COL4 FROM DUAL UNION ALL
4 select 2, 'D', 'E', 'F' FROM DUAL UNION ALL
5 select 3, 'A', 'B', 'C' FROM DUAL UNION ALL
6 select 4, 'G', 'H', 'I' FROM DUAL UNION ALL
7 select 5, 'D', 'E', 'F' FROM DUAL
8 )
9 SELECT COL1, COL2, COL3, COL4
10 FROM
11 (
12 SELECT COL1, COL2, COL3, COL4, ROW_NUMBER () OVER (PARTITION BY COL2, COL3, COL4 ORDER BY COL1) AS RN
13 FROM T
14 )
15 WHERE RN=1
16 /
COL1 C C C
1 A B C
2 D E F
4 G H I
-
Hi All,
Request you to please assist me in handling the error stack.
Here is the scenario..
There were error records in the DTP, and I have corrected the errors in the error stack.
Now I need to know how to post these corrected records.
Thanks
Hari

Hi,
These are the steps to handle data records with errors:
The DTP shows a Failed status in the DTP Process Monitor because of an invalid character in the records.
By clicking on Error Stack we can check the error records.
There are a total of 3 records with errors in the source data.
Correct the erroneous records in the Error Stack by clicking the edit button at the top left.
Create an Error DTP from the Update tab of the standard DTP.
Once the Error DTP is created, we can check that the status of the standard DTP has changed from Create to Display, and we can also see the Error DTP under the object for which we created the standard DTP.
Here is the Error DTP:
Schedule the Error DTP from the Execute tab.
The Error DTP process monitor shows the 3 records that we corrected in the Error Stack in the earlier steps.
We can also check the status of the standard DTP; it is also green now (without errors).
You can also check the updated record counts of the standard and Error DTPs in the Manage tab of the data target. -
Possible to pick records with out identifier in a file?
Hi All,
I have a file with header, trailer and detail records. I want to know: does each need a separate identifier, or can BODS pick them up without any identifier?
Say the header has an identifier H and the trailer has T, whereas the detail records don't have any. Is it possible to pick and process the file in BODS for a header, its respective trailer and its respective details?
Thanks
Rajeev

Hi,
I'm assuming your file contains only one set of header, detail and trailer records. I'm also assuming that the first field of a detail record never begins with H or T. If so, you can define three separate file formats: one for the header, one for the details and one for the trailer. If the header and trailer records are identical in structure, you need just one file format for both of them and a separate one for the details. In the file format for the header/trailer you define the record structure, which would obviously include the indicator field. When you want to read the header record, use the header file format (or the combined header/trailer format) and specify WHERE indicator = 'H' in the Query transform; similarly, when you want to read the trailer records, use the trailer file format (or the combined one) and specify WHERE indicator = 'T'. Lastly, when reading the detail records, use the detail file format and specify WHERE indicator <> 'H' and indicator <> 'T'.
Hope this helps. -
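Outside of BODS, the routing logic the reply describes can be sketched in plain Python; this is only an illustration of the indicator test, and the 'H'/'T' first-character convention is the assumption stated in the thread.

```python
def split_file(lines):
    """Route each line by its indicator: 'H' -> header, 'T' -> trailer,
    anything else -> detail (detail rows carry no indicator)."""
    header, trailer, details = [], [], []
    for line in lines:
        if line.startswith("H"):
            header.append(line)
        elif line.startswith("T"):
            trailer.append(line)
        else:
            details.append(line)
    return header, details, trailer
```

As in the BODS solution, this only works because a detail record is assumed never to begin with H or T.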
How to check at what time the extractor has picked records from r3 tables
Hi experts ,
We want to know exactly at what time the extractor picked up the records from the R/3 tables,
i.e. the timestamp at which the extractor picked the records from the R/3 tables after the R/3 entries were made.
Regards ,
Subash Balakrishnan

Hi,
The following are a few function modules which will give you the information you need, depending on the area you are working in.
SD Billing: LOG_CONTENT_BILLING
Delivery: LOG_CONTENT_DELIVERY
Purchasing: LOG_CONTENT_PURCHASING etc...
See if the above FMs help you in any way... -
How to print check box in ALV list display and how to pick selected ones
Hi
I am displaying an ALV list. For that, I'm adding a checkbox field by filling the field catalog as below:
wa_fldcat-checkbox = 'X'.
wa_fldcat-edit = 'X'.
But the checkbox shows up in disabled mode only. I want to display the checkbox as editable, and if I select it, I want to pick those records. For the ALV grid I found an FM to pick the selected records, as below:
DATA ref_grid TYPE REF TO cl_gui_alv_grid.
IF ref_grid IS INITIAL.
CALL FUNCTION 'GET_GLOBALS_FROM_SLVC_FULLSCR'
IMPORTING
e_grid = ref_grid.
ENDIF.
IF ref_grid IS NOT INITIAL.
CALL METHOD ref_grid->check_changed_data.
ENDIF.
But how can I do this for a list display, to pick the selected records?
Can anyone suggest something regarding this?
Thanks in advance.
Rahul.

Hi,
Thanks, now it's enabled. But how can we pick the records that I selected via the checkbox from that list?
I found this one for the ALV grid:
DATA ref_grid TYPE REF TO cl_gui_alv_grid.
IF ref_grid IS INITIAL.
CALL FUNCTION 'GET_GLOBALS_FROM_SLVC_FULLSCR'
IMPORTING
e_grid = ref_grid.
ENDIF.
IF ref_grid IS NOT INITIAL.
CALL METHOD ref_grid->check_changed_data.
ENDIF.
But how is this done for a normal ALV list display?
Thanks.
rahul -
Changing update mode in Error DTP
Hi Gurus,
I have some issues with data in PSA. I am trying to solve it using Error DTP.
While executing the DTP, one record failed to load, giving a bad request. So I created an error DTP. By default its extraction mode is Delta.
DTP extraction mode -> Full
Error DTP extraction mode -> Delta. How can I make it Full? If I try to change it, the option is disabled.
Thanks in advance!
Regards
Anu

Hi Anu,
In my system the original DTP is Full, and when I create the error DTP, its extraction mode is also Full.
Yes, you are right, it is disabled. Can you check the update mode of your original DTP?
Once the error DTP is created, we cannot change its update mode.
Regards,
Venkatesh. -
ANSI 834 Full Profile System Extract - DTP*303
Hi!
I've a requirement to generate a DTP*303 segment for passing Person Changes Effective Date. The DTP*303 record layout is available under Seeded ANSI 834 Change only Profile. Please let me know if there is a way to add / generate the DTP*303 segment under the Full profile system extract. Also, please explain the purpose of the Person Change Effective Date segment i.e. which are the appropriate scenarios in which the DTP*303 segment be used. Thanks in advance for your help.
Vamsee

Gurus: Please help me understand whether the DTP*303 segment can be used in a full profile system extract. Thanks. Vamsee
-
Programming Logic required for pulling the records for past month /week
Hi All
I need help in the SQL programming logic.
Oracle Database Version: 10.2.0.3.0
Requirement
In a data warehouse environment, I need to programme for weekly and monthly automated batch jobs to insert the data from Data_tbl to Reporting_tbl for generating reports. Tables descriptions are given below.
Table1 - Data_tbl (source table; this table gets updated every day).
Record_dt first_name last_name
Table2 - Reporting_tbl (target table)
Cycle_dt first_name last_name
1. Monthly Report
In the SQL Query, I have where clause condition—
Where Record_dt >= '01-nov-08' and record_dt <= '30-nov-08'
Using the above condition in development, I am pulling the data from the source table for the past month. This will be repeated every month and should be automated.
I.e., if I run this report any time in Dec 2008, it should pick records with dates from Nov 01st to Nov 30th 2008; if I run it any time in Jan 2009, it should pick records with dates from Dec 01st to Dec 31st 2008.
Date Values should be assigned for past month. Value of Cycle_dt in target table should be end date of past month like 30-nov-2008, 31-dec-2008.
2. Weekly Report
In the SQL Query, I have where clause condition—
Where Record_dt >= '01-dec-08' and record_dt <= '07-dec-08'
Here week start day is Monday and end day is Sunday.
If I run the report between Dec 08th to Dec 14th , it should pull records of dates from Dec 01st to Dec 07th 2008.
On Dec 15th, it should pick from Dec 08th to Dec 14th.
Value of Cycle_dt in target table should be end date of past week like 07-Dec-2008, 14-Dec-2008.
Please help me with the logics for both Monthly and Weekly reports.
Thanks

Hi,
For the monthly report, instead of
Where Record_dt >= '01-nov-08' and record_dt <= '30-nov-08'
say:
Where Record_dt >= TRUNC (ADD_MONTHS (SYSDATE, -1), 'MM')
and record_dt < TRUNC (SYSDATE, 'MM')
SYSDATE is the current DATE.
TRUNC (SYSDATE, 'MM') is the beginning of the current month. (Notice the condition above is less than this date, not equal to it.)
ADD_MONTHS (SYSDATE, -1) is a date exactly one month ago, therefore it is in the previous month.
For the weekly report, instead of:
Where Record_dt >= '01-dec-08' and record_dt <= '07-dec-08'
say:
Where Record_dt >= TRUNC (SYSDATE - 7, 'IW')
and record_dt < TRUNC (SYSDATE, 'IW')
TRUNC (dt, 'IW') is the beginning of the ISO week (Monday-Sunday) that contains dt. Again, notice the end condition is strictly less than the beginning of the current week. -
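For readers outside Oracle, the same half-open date windows can be sketched in Python; this mirrors the TRUNC/ADD_MONTHS logic from the reply, and the function names are my own, not from the thread.

```python
from datetime import date, timedelta

def previous_month_window(today):
    """Return [start, end) covering the whole previous month,
    like TRUNC(ADD_MONTHS(SYSDATE,-1),'MM') .. TRUNC(SYSDATE,'MM')."""
    end = today.replace(day=1)                        # first day of current month
    start = (end - timedelta(days=1)).replace(day=1)  # first day of previous month
    return start, end

def previous_week_window(today):
    """Return [start, end) covering the previous ISO week (Mon-Sun),
    like TRUNC(SYSDATE-7,'IW') .. TRUNC(SYSDATE,'IW')."""
    end = today - timedelta(days=today.weekday())     # Monday of current week
    start = end - timedelta(days=7)                   # Monday of previous week
    return start, end
```

Run any time in December 2008, the monthly window is Nov 01 (inclusive) to Dec 01 (exclusive); run on Dec 15th 2008 (a Monday), the weekly window is Dec 08 to Dec 15, i.e. Dec 08th-14th, matching the requirement.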
Dynamic Table with Random Records
What I am trying to do is select random records from a table and display them in a dynamic table with the maximum number of columns set to 3, so that the 4th record starts on a new row. Below is what I have right now; it works for randomly picking records but has no logic to arrange the columns in a table. If there is an easier way, feel free to let me know. I have tried various ways to do this but none seem to work.
<CFQUERY NAME="getItems" DATASOURCE="absi">
SELECT catfit.*, modcats.*, prodmat.*, prod.* FROM catfit,
modcats,
prodmat, prod WHERE prodmat.prodid=catfit.prodid And
catfit.catid=modcats.catid
ORDER BY modl ASC </cfquery>
<cfif getItems.recordCount>
<cfset showNum = 3>
<cfif showNum gt getItems.recordCount>
<cfset showNum = getItems.recordCount>
</cfif>
<cfset itemList = "">
<cfloop from="1" to="#getItems.recordCount#"
index="i">
<cfset itemList = ListAppend(itemList, i)>
</cfloop>
<cfset randomItems = "">
<cfset itemCount = ListLen(itemList)>
<cfloop from="1" to="#itemCount#" index="i">
<cfset random = ListGetAt(itemList, RandRange(1,
itemCount))>
<cfset randomItems = ListAppend(randomItems, random)>
<cfset itemList = ListDeleteAt(itemList,
ListFind(itemList, random))>
<cfset itemCount = ListLen(itemList)>
</cfloop>
<cfloop from="1" to="#showNum#" index="i">
<cfoutput>
<table width="205" border="0" align="left"
cellpadding="0" cellspacing="0">
<tr>
<td width="235" height="116"> <div
align="center"><img
src="../Products/ProductPictures/#getitems.pic[ListGetAt(randomItems,
i)]#" width="100"></div></td>
</tr>
<tr>
<td
class="ProdTitle">#getitems.brand[ListGetAt(randomItems,
i)]# #getitems.modl[ListGetAt(randomItems, i)]#</td>
</tr>
<tr>
<td
class="paragraph">$#getitems.prc[ListGetAt(randomItems,
i)]#</td>
</tr>
<tr>
<td><A
href="../Products/details.cfm?prodid=#getItems.prodid[ListGetAt(randomItems,
i)]#" class="linkcontact">more
info</a></td>
</tr>
<tr>
<td> </td>
</tr>
</table>
</cfoutput>
</cfloop>
</cfif>
To start a new row after 3 records, do something like this:
<table>
<tr>
<cfoutput query="something">
<td>#data#</td>
<cfif currentrow mod 3 is 0>
</tr><tr>
</cfif>
</cfoutput>
</tr>
</table>
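The same "new row every 3 cells" idea, independent of ColdFusion, is just chunking a flat list; a minimal Python sketch (the function name is mine, not from the thread):

```python
def chunk_rows(records, per_row=3):
    """Split a flat record list into table rows of at most per_row cells,
    the same effect as the 'currentrow mod 3' test in the snippet above."""
    return [records[i:i + per_row] for i in range(0, len(records), per_row)]
```

Each inner list becomes one `<tr>`, so the 4th record naturally lands on the second row.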
You should also know that your approach is very inefficient, in that you are bringing more data into ColdFusion than you need. First, you are selecting every field from four tables when you don't appear to be using all of them. Second, you are selecting every record when you only want to use 3. There are better ways out there, but they are db-specific and you did not say which database you are using. -
Duplicate DTP in Process chain
Hi folks,
I have 2 separate loads (InfoPackages) scheduled that load into an InfoObject via a DTP. I can't run both of these before running the DTP, as this may result in duplicate entries. Ideally, I would run InfoPackage 1 followed by the DTP, and then InfoPackage 2 followed by the same DTP. The process chain workbench does not allow this, which is understandable. Is there a way to do this in one process chain, or are 2 separate process chains the only way to achieve this?
Thanks.

Hi Intel,
It sorts the entries and deletes the adjacent duplicates.
10000 A
10000 A
20000 B
30000 C
50000 D
50000 D
Now it will delete the duplicate records, i.e. 10000 and 50000. For an InfoObject this does not create any problem, as at the InfoObject level the update is always OVERWRITE,
and it does all this processing before sending the records to the InfoObject attribute table.
You can also enable error handling in the DTP (valid records update, request green) without selecting "Handle duplicate records".
This will send all the duplicates to the error stack, and you can compare the two behaviours to understand how it works.
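The "sort, then delete adjacent duplicates" behaviour described above can be sketched in Python; this is only an illustration of the semantics, not the actual BW implementation.

```python
def delete_adjacent_duplicates(records):
    """Sort the records, then drop each record that equals the one
    kept immediately before it, as in the 10000/50000 example above."""
    out = []
    for rec in sorted(records):
        if not out or rec != out[-1]:
            out.append(rec)
    return out
```

With the example data, the second 10000/A and 50000/D rows are dropped, which is harmless for an InfoObject because the attribute update is OVERWRITE anyway.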
Regards
Anindya
Edited by: Anindya Bose on Feb 9, 2012 1:22 AM -
Hi,
I have a requirement as follows
I need to have a report (say in page1 on table1) with the checkboxes in order to select the required records.
When I click on next button, I should be able to display these selected records only to the next page(say page2) .
Page2 has submit button and when I click on submit these selected records should be inserted into a table(say table2).
I am able to create a report with checkboxes but not understanding how to collect these selected records to the next page and insert them into database table.
Kindly advise me on how this can be achieved.
Thanks in Advance.
Sarvani.

Sarvani,
here is an example on how to select checked rows:
http://htmldb.oracle.com/pls/otn/f?p=31517:95
You should use an array to collect the picked records - an item with concatenated primary keys. After that, you may use the example here:
http://htmldb.oracle.com/pls/otn/f?p=31517:84
to insert those records into a table.
Denes Kubicek
http://deneskubicek.blogspot.com/
http://htmldb.oracle.com/pls/otn/f?p=31517:1
------------------------------------------------------------------- -
Hi, I am new to BI.
I am trying to create an InfoCube with three fields: Custno (primary), City and Land.
When I run the DTP, the records go into the error stack, even though I give a unique key to Custno every time.
Can anyone tell me why it keeps rejecting the records? There are no entries in the P tables either.
Please, someone tell me.
Regards ,
KB.

Hi Kamala,
I think you are loading the cube through a flat-file DataSource.
I assume the flat file should contain the City and Land data in capital letters, or the City and Land InfoObjects should be flagged for lowercase letters.
You need to do one of these. If not, the records will go to the error stack, where you can correct them and then trigger the error DTP.
This could be one reason i guess
Regards
vamsi -
Issue in the Delta load using RDA
Hi All,
I am facing an issue while trying to load delta using RDA from R/3 source system.
Following are the steps followed:
1. Created a realtime Generic Datasource with timestamp as delta specific field and replicated it to BI.
2. First I have created the Infopackage(Initialization with data transfer) and loaded upto PSA.
3. Created a standard DTP to load till DSO and activated the data.
4. Then created a realtime delta infopackage and assigned it to a Daemon.
5. Converted the standard DTP to realtime DTP and assigned the same to the Daemon.
6. Started the Daemon, giving an interval of 5 minutes.
In the first time, Initialization with data transfer is taking the records correctly. But when I run Daemon for taking delta records, its taking all the records again, i.e. both the previously uploaded historical data and the delta records).
Also after the first delta run, the request status in the Daemon monitor, for both Infopackage and DTP changes to red and Daemon stops automatically.
Can anyone please help me to solve these issues.
Thanks & Regards,
Salini.

Salini S wrote:
In the first time, Initialization with data transfer is taking the records correctly. But when I run Daemon for taking delta records, its taking all the records again, i.e. both the previously uploaded historical data and the delta records). .
If I understand you correctly, you initially did a full load, yes? Well, next you need to do an initialization, and after that, the delta.
The reason is that if you select delta initialization without data transfer, the delta queue is initialized now, and the next time you do a delta load it will pick up only the changed records.
If you select delta initialization with data transfer, the delta queue is initialized and the records are picked up in the same load.
As you know your targets will receive the changed records from the delta queue.
Salini S wrote:
Also after the first delta run, the request status in the Daemon monitor, for both Infopackage and DTP changes to red and Daemon stops automatically.
I take it the InfoPackage has run successfully? Did you check? If it has and the error is in the DTP, then I suggest the following.
At runtime, erroneous data records are written to an error stack if the error handling for the data transfer process is activated. You use the error stack to update the data to the target destination once the error is resolved.
To resolve the error, in the monitor for the data transfer process, you can navigate to the PSA maintenance by choosing Error Stack in the toolbar, and display and edit erroneous records in the
error stack.
I suggest you create an error DTP for the active data transfer process on the Update tab page. (If key fields of the error stack for DataStore objects are to be overwritten, define the key fields of the error stack on the Extraction tab page under Semantic Groups.) The error DTP uses full update mode to extract the data from the error stack (in this case, the source of the DTP) and transfer it to the target that you have already defined in the data transfer process. Once the data records have been successfully updated, they are deleted from the error stack. If there are any erroneous data records, they are written to the error stack again in a new error DTP request.
As I'm sure you know when a DTP request is deleted, the corresponding data records are also deleted from the error stack.
I hope the above helps you. -
Hi
I am using BI 7.0. Our design is: we extract data from R/3 to an ODS -> main cube -> backup cube. We move all the data from the main cube to the backup cube and delete the contents of the main cube. There is a reason for doing this. I have now observed negative order quantity values in the backup cube, and the same thing in the main cube. However, there are no negative values in the ODS. How could this happen? Every day the delta comes from R/3 to the ODS. The main cube picks up this data as a delta, and the backup cube DTP picks it up daily as a full load from the main cube and then deletes the contents of the main cube.
Could it be that the system finds no data in the target, so it sends negative values? (Not all records are negative, only a few of them.)
Thanks for any advice..
Venkat

Hello,
What is the delta type for your datasource?
Thanks