One record, two tables: identity duplication issue
Hi,
I have to insert a file's info into two different tables: the first one is the dictionary and the second one is used to store the file itself.
The issue is that the ID written to the second table is sometimes not correct. Could you please let me know if you see anything wrong in the following SQL:
-- Insert into the dictionary
INSERT INTO DB1.[dbo].[FilesDictionary]
(FileRef,
FileDescription)
VALUES (@FileRef,
@FileDescription);
-- Get the dictionary ID
SELECT @FileDictionaryID = SCOPE_IDENTITY();
-- Save the file in the second table using the dictionary ID as reference
INSERT INTO DB2.[dbo].[FilesStore]
(FileDictionaryID
,FileAttachment)
SELECT FileDictionaryID
, @FileAttachment
FROM DB1.[dbo].[FilesDictionary]
WHERE FileDictionaryID = @FileDictionaryID
Issue: sometimes the system saves the info like this:
Table:DB1.[dbo].[FilesDictionary]
FileDictionaryID FileDescription
123 Test
124 Test
DB2.[dbo].[FilesStore] -- Files are saved in another DB for storage limitations, and FileDictionaryID is the link between both DBs
FileDictionaryID FileAttachment
124 --file string
<-- this id should have been 123 instead of 124
124 --file string
Should I use @@IDENTITY instead of SCOPE_IDENTITY()? Is there any other issue? I think SCOPE_IDENTITY() is correct in theory, but I want to be sure.
Thanks for your help
Hi,
Thanks both for your replies. I really appreciate it.
FileDictionaryID is the primary key in the FilesDictionary table but not in the FilesStore table. So FileDictionaryID is always unique in the FilesDictionary table but could be duplicated in the FilesStore table (it shouldn't be, based on the previous code).
Those tables are updated only by the application calling this SP, so no one could set this manually.
I am working with SQL Server 2008, so Jn_ds could be right, but I am already using OUTPUT when the variable @FileDictionaryID is declared, so...
It's hard to test because I couldn't reproduce the issue, but the DB data shows it is happening.
I am going to investigate the SQL Server 2008 bug, but any suggestion in the meantime is welcome :) Thanks!
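For what it's worth, a common way to sidestep identity-capture issues entirely is to take the new ID from the OUTPUT clause of the first INSERT and wrap both statements in one transaction. A minimal sketch, assuming the same table and variable names as above (untested against your schema):

```sql
-- Hedged sketch: capture the new dictionary ID directly from the INSERT
-- via OUTPUT, so no separate SCOPE_IDENTITY() read is needed.
DECLARE @NewId TABLE (FileDictionaryID INT);

BEGIN TRAN;

INSERT INTO DB1.[dbo].[FilesDictionary] (FileRef, FileDescription)
OUTPUT inserted.FileDictionaryID INTO @NewId
VALUES (@FileRef, @FileDescription);

INSERT INTO DB2.[dbo].[FilesStore] (FileDictionaryID, FileAttachment)
SELECT FileDictionaryID, @FileAttachment
FROM @NewId;

COMMIT TRAN;
```

The transaction also ensures a dictionary row is never committed without its matching store row.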
Similar Messages
-
Urgent: Error-Record 39,779, segment 0001 is not in the cross-record table
Hi Gurus,
This is an urgent production issue: I got the following errors while updating data records from a DSO to an InfoCube in delta mode:
1. Record 39,779, segment 0001 is not in the cross-record table
2. Error in substep: End Routine
I don't know whether the problem is in the End Routine or somewhere else.
The End Routine is this:
PROGRAM trans_routine.
CLASS routine DEFINITION
CLASS lcl_transform DEFINITION.
PUBLIC SECTION.
Attributs
DATA:
p_check_master_data_exist
TYPE RSODSOCHECKONLY READ-ONLY,
*- Instance for getting request runtime attributs;
Available information: Refer to methods of
interface 'if_rsbk_request_admintab_view'
p_r_request
TYPE REF TO if_rsbk_request_admintab_view READ-ONLY.
PRIVATE SECTION.
TYPE-POOLS: rsd, rstr.
Rule specific types
TYPES:
BEGIN OF tys_TG_1,
InfoObject: ZVEHICLE Unique Vehicle ID.
/BIC/ZVEHICLE TYPE /BIC/OIZVEHICLE,
InfoObject: ZLOCID Mine Site.
/BIC/ZLOCID TYPE /BIC/OIZLOCID,
InfoObject: ZLOCSL Location Storage Location.
/BIC/ZLOCSL TYPE /BIC/OIZLOCSL,
InfoObject: 0VENDOR Vendor.
VENDOR TYPE /BI0/OIVENDOR,
InfoObject: ZNOMTK Nomination Number.
/BIC/ZNOMTK TYPE /BIC/OIZNOMTK,
InfoObject: ZNOMIT Nomination Item.
/BIC/ZNOMIT TYPE /BIC/OIZNOMIT,
InfoObject: ZNOMNR Nomination number.
/BIC/ZNOMNR TYPE /BIC/OIZNOMNR,
InfoObject: ZVSTTIME Vehicle Starting Time Stamp.
/BIC/ZVSTTIME TYPE /BIC/OIZVSTTIME,
InfoObject: ZVEDTIME Vehicle Ending Time Stamp.
/BIC/ZVEDTIME TYPE /BIC/OIZVEDTIME,
InfoObject: ZNETWT Net Weight.
/BIC/ZNETWT TYPE /BIC/OIZNETWT,
InfoObject: TU_GRS_WG Gross Wgt.
/BIC/TU_GRS_WG TYPE /BIC/OITU_GRS_WG,
InfoObject: ZTU_TRE_W Tare Wgt.
/BIC/ZTU_TRE_W TYPE /BIC/OIZTU_TRE_W,
InfoObject: ZCUSTWT Customer Weight.
/BIC/ZCUSTWT TYPE /BIC/OIZCUSTWT,
InfoObject: ZCAR_NO Car Number.
/BIC/ZCAR_NO TYPE /BIC/OIZCAR_NO,
InfoObject: ZINBND_ID Train Consist Inbound ID.
/BIC/ZINBND_ID TYPE /BIC/OIZINBND_ID,
InfoObject: ZOTBND_ID Train Consist Return Load.
/BIC/ZOTBND_ID TYPE /BIC/OIZOTBND_ID,
InfoObject: 0SOLD_TO Sold-to Party.
SOLD_TO TYPE /BI0/OISOLD_TO,
InfoObject: 0CUSTOMER Customer Number.
CUSTOMER TYPE /BI0/OICUSTOMER,
InfoObject: 0SHIP_TO Ship-To Party.
SHIP_TO TYPE /BI0/OISHIP_TO,
InfoObject: ZVEHI_NO Vehicle Number.
/BIC/ZVEHI_NO TYPE /BIC/OIZVEHI_NO,
InfoObject: ZCARSTDAT Car Start Date.
/BIC/ZCARSTDAT TYPE /BIC/OIZCARSTDAT,
InfoObject: ZCAREDDAT Car End Date.
/BIC/ZCAREDDAT TYPE /BIC/OIZCAREDDAT,
InfoObject: ZCARSTTIM Car Start Time.
/BIC/ZCARSTTIM TYPE /BIC/OIZCARSTTIM,
InfoObject: ZCAREDTIM Car End Time.
/BIC/ZCAREDTIM TYPE /BIC/OIZCAREDTIM,
InfoObject: 0COMPANY Company.
COMPANY TYPE /BI0/OICOMPANY,
InfoObject: ZCONTRACT Contract.
/BIC/ZCONTRACT TYPE /BIC/OIZCONTRACT,
InfoObject: 0PLANT Plant.
PLANT TYPE /BI0/OIPLANT,
InfoObject: ZLOADTIME Total Vehicle Loading time.
/BIC/ZLOADTIME TYPE /BIC/OIZLOADTIME,
InfoObject: ZSHIPDATE Shipping Date.
/BIC/ZSHIPDATE TYPE /BIC/OIZSHIPDATE,
InfoObject: ZSHIPTIME Shipping Time.
/BIC/ZSHIPTIME TYPE /BIC/OIZSHIPTIME,
InfoObject: ZMNEDDT Manifest End Date.
/BIC/ZMNEDDT TYPE /BIC/OIZMNEDDT,
InfoObject: ZMNEDTM Manifest End Time.
/BIC/ZMNEDTM TYPE /BIC/OIZMNEDTM,
InfoObject: ZLDEDDT Loaded End Date.
/BIC/ZLDEDDT TYPE /BIC/OIZLDEDDT,
InfoObject: ZLDEDTM Loaded End Time.
/BIC/ZLDEDTM TYPE /BIC/OIZLDEDTM,
InfoObject: ZMANVAR Manifest Variance.
/BIC/ZMANVAR TYPE /BIC/OIZMANVAR,
InfoObject: ZTU_TYPE Trpr Unit Type.
/BIC/ZTU_TYPE TYPE /BIC/OIZTU_TYPE,
InfoObject: ZACTULQTY Actual posted quantity.
/BIC/ZACTULQTY TYPE /BIC/OIZACTULQTY,
InfoObject: ZVEDDT Vehicle End Date.
/BIC/ZVEDDT TYPE /BIC/OIZVEDDT,
InfoObject: ZVEDTM Vehicle End Time.
/BIC/ZVEDTM TYPE /BIC/OIZVEDTM,
InfoObject: ZVSTDT Vehicle Start Date.
/BIC/ZVSTDT TYPE /BIC/OIZVSTDT,
InfoObject: ZVSTTM Vehicle Start Time.
/BIC/ZVSTTM TYPE /BIC/OIZVSTTM,
InfoObject: ZTRPT_TYP Vehicle type.
/BIC/ZTRPT_TYP TYPE /BIC/OIZTRPT_TYP,
InfoObject: 0CALMONTH Calendar Year/Month.
CALMONTH TYPE /BI0/OICALMONTH,
InfoObject: 0CALYEAR Calendar Year.
CALYEAR TYPE /BI0/OICALYEAR,
InfoObject: ZLOEDDT Quality Sent End Date.
/BIC/ZLOEDDT TYPE /BIC/OIZLOEDDT,
InfoObject: ZLOEDTM Quality sent End Time.
/BIC/ZLOEDTM TYPE /BIC/OIZLOEDTM,
InfoObject: ZATMDDT At Mine End Date.
/BIC/ZATMDDT TYPE /BIC/OIZATMDDT,
InfoObject: ZATMDTM At Mine End Time.
/BIC/ZATMDTM TYPE /BIC/OIZATMDTM,
InfoObject: ZDELAY Delay Duration.
/BIC/ZDELAY TYPE /BIC/OIZDELAY,
InfoObject: ZSITYP Schedule type.
/BIC/ZSITYP TYPE /BIC/OIZSITYP,
InfoObject: ZDOCIND Reference document indicator.
/BIC/ZDOCIND TYPE /BIC/OIZDOCIND,
InfoObject: 0BASE_UOM Base Unit of Measure.
BASE_UOM TYPE /BI0/OIBASE_UOM,
InfoObject: 0UNIT Unit of Measure.
UNIT TYPE /BI0/OIUNIT,
InfoObject: ZACT_UOM Actual UOM.
/BIC/ZACT_UOM TYPE /BIC/OIZACT_UOM,
Field: RECORD.
RECORD TYPE RSARECORD,
END OF tys_TG_1.
TYPES:
tyt_TG_1 TYPE STANDARD TABLE OF tys_TG_1
WITH NON-UNIQUE DEFAULT KEY.
$$ begin of global - insert your declaration only below this line -
... "insert your code here
$$ end of global - insert your declaration only before this line -
METHODS
end_routine
IMPORTING
request type rsrequest
datapackid type rsdatapid
EXPORTING
monitor type rstr_ty_t_monitors
CHANGING
RESULT_PACKAGE type tyt_TG_1
RAISING
cx_rsrout_abort.
METHODS
inverse_end_routine
IMPORTING
i_th_fields_outbound TYPE rstran_t_field_inv
i_r_selset_outbound TYPE REF TO cl_rsmds_set
i_is_main_selection TYPE rs_bool
i_r_selset_outbound_complete TYPE REF TO cl_rsmds_set
i_r_universe_inbound TYPE REF TO cl_rsmds_universe
CHANGING
c_th_fields_inbound TYPE rstran_t_field_inv
c_r_selset_inbound TYPE REF TO cl_rsmds_set
c_exact TYPE rs_bool.
ENDCLASS. "routine DEFINITION
$$ begin of 2nd part global - insert your code only below this line *
... "insert your code here
$$ end of 2nd part global - insert your code only before this line *
CLASS routine IMPLEMENTATION
CLASS lcl_transform IMPLEMENTATION.
Method end_routine
Calculation of result package via end routine
Note: Update of target fields depends on rule assignment in
transformation editor. Only fields that have a rule assigned,
are updated to the data target.
<-> result package
METHOD end_routine.
*=== Segments ===
FIELD-SYMBOLS:
<RESULT_FIELDS> TYPE tys_TG_1.
DATA:
MONITOR_REC TYPE rstmonitor.
*$*$ begin of routine - insert your code only below this line *-*
Fill the following fields by reading the Nomination and Vehicle DSOs
SOLD_TO, Customer
data: L_TIMESTAMP1 TYPE timestamp,
L_TIMESTAMP2 TYPE timestamp,
L_TIMESTAMP3 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP4 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP5 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP6 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP7 TYPE timestamp,
L_TIMESTAMP8 TYPE timestamp,
L_TIMESTAMP9 type timestamp,
L_TIMESTAMP10 type TIMESTAMP,
L_CHAR1(14),
L_CHAR2(14),
l_duration type I,
L_TS TYPE TZONREF-TZONE,
l_flag,
l_nomit TYPE /BIC/OIZNOMIT,
l_error_flag.
l_TS = 'CST'.
Data: EXTRA_PACKAGE type tyt_TG_1.
data: extra_fields type tys_TG_1.
LOOP at RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
clear l_error_flag.
Get sold_to and customer from nomination table.
Select single SOLD_TO /BIC/ZLOCSL /BIC/ZCONTRACT COMPANY
/BIC/ZMNEDDT /BIC/ZMNEDTM /BIC/ZLDEDDT
/BIC/ZLDEDTM SHIP_TO /BIC/ZACTULQTY
/BIC/ZLOEDDT /BIC/ZLOEDTM /BIC/ZDELAY
/BIC/ZATMDDT /BIC/ZATMDTM
/BIC/ZSITYP /BIC/ZDOCIND
into (<RESULT_FIELDS>-SOLD_TO,
<RESULT_FIELDS>-/BIC/ZLOCSL,
<RESULT_FIELDS>-/BIC/ZCONTRACT,
<RESULT_FIELDS>-company,
<RESULT_FIELDS>-/BIC/ZMNEDDT,
<RESULT_FIELDS>-/BIC/ZMNEDTM,
<RESULT_FIELDS>-/BIC/ZLDEDDT,
<RESULT_FIELDS>-/BIC/ZLDEDTM,
<RESULT_FIELDS>-SHIP_TO,
<RESULT_FIELDS>-/BIC/ZACTULQTY,
<RESULT_FIELDS>-/BIC/ZLOEDDT,
<RESULT_FIELDS>-/BIC/ZLOEDTM,
<RESULT_FIELDS>-/BIC/ZDELAY,
<RESULT_FIELDS>-/BIC/ZATMDDT,
<RESULT_FIELDS>-/BIC/ZATMDTM,
<RESULT_FIELDS>-/BIC/ZSITYP,
<RESULT_FIELDS>-/BIC/ZDOCIND)
from /BIC/AZTSW_0000
where /BIC/ZNOMTK = <RESULT_FIELDS>-/BIC/ZNOMTK
AND /BIC/ZNOMIT = <RESULT_FIELDS>-/BIC/ZNOMIT.
Select Invalid Nominations
if sy-subrc <> 0.
l_error_flag = 'X'.
endif.
<RESULT_FIELDS>-customer = <RESULT_FIELDS>-SOLD_TO.
Prepare time stamp for Time Differences
Vehicle Starting Time Stamp
clear : L_TIMESTAMP9,L_TIMESTAMP10.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZCARSTDAT TIME
<RESULT_FIELDS>-/BIC/ZCARSTTIM
INTO TIME STAMP L_TIMESTAMP9 TIME ZONE l_TS.
Vehicle Ending Time Stamp
CONVERT DATE <RESULT_FIELDS>-/BIC/ZCAREDDAT TIME
<RESULT_FIELDS>-/BIC/ZCAREDTIM
INTO TIME STAMP L_TIMESTAMP10 TIME ZONE l_TS.
Clear : L_TIMESTAMP3, L_TIMESTAMP4,
<RESULT_FIELDS>-/BIC/ZVEDTIME,
<RESULT_FIELDS>-/BIC/ZVSTTIME.
<RESULT_FIELDS>-/BIC/ZVEDTIME = L_TIMESTAMP10.
<RESULT_FIELDS>-/BIC/ZVSTTIME = L_TIMESTAMP9.
L_TIMESTAMP3 = L_TIMESTAMP10.
L_TIMESTAMP4 = L_TIMESTAMP9.
Calculate the load time
IF L_TIMESTAMP3 is initial.
clear <RESULT_FIELDS>-/BIC/ZLOADTIME.
elseif L_TIMESTAMP4 is initial.
clear <RESULT_FIELDS>-/BIC/ZLOADTIME.
else.
CALL FUNCTION 'CCU_TIMESTAMP_DIFFERENCE'
EXPORTING
timestamp1 = L_TIMESTAMP3
timestamp2 = L_TIMESTAMP4
IMPORTING
DIFFERENCE = <RESULT_FIELDS>-/BIC/ZLOADTIME.
ENDIF.
Calculate the Manifest Variance
clear : L_TIMESTAMP5,L_TIMESTAMP6,L_TIMESTAMP7,L_TIMESTAMP8.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZMNEDDT TIME
<RESULT_FIELDS>-/BIC/ZMNEDTM
INTO TIME STAMP L_TIMESTAMP7 TIME ZONE l_TS.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZLDEDDT TIME
<RESULT_FIELDS>-/BIC/ZLDEDTM
INTO TIME STAMP L_TIMESTAMP8 TIME ZONE l_TS.
L_TIMESTAMP5 = L_TIMESTAMP7.
L_TIMESTAMP6 = L_TIMESTAMP8.
Calculate the Manifest Variance
IF L_TIMESTAMP5 is initial.
clear <RESULT_FIELDS>-/BIC/ZMANVAR.
elseif L_TIMESTAMP6 is initial.
clear <RESULT_FIELDS>-/BIC/ZMANVAR.
else.
CALL FUNCTION 'CCU_TIMESTAMP_DIFFERENCE'
EXPORTING
timestamp1 = L_TIMESTAMP5
timestamp2 = L_TIMESTAMP6
IMPORTING
DIFFERENCE = <RESULT_FIELDS>-/BIC/ZMANVAR.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDIF.
Delete datapackets with blank nominations
Delete datapackets with blank shipdate and Invalid Time Stamps
*IF <RESULT_FIELDS>-/BIC/ZNOMTK IS INITIAL OR
<RESULT_FIELDS>-/BIC/ZSHIPDATE IS INITIAL.
l_error_flag = 'X'.
*ENDIF.
<RESULT_FIELDS>-/BIC/ZVEHI_NO = 1.
<RESULT_FIELDS>-CALMONTH = <RESULT_FIELDS>-/BIC/ZSHIPDATE(6).
<RESULT_FIELDS>-CALYEAR = <RESULT_FIELDS>-/BIC/ZSHIPDATE(4).
if l_error_flag = 'X'.
Looks like Monitor Entries are not working in SP11.
Hence the following is commented temporarily.
CLEAR MONITOR_REC.
MONITOR_REC-MSGID = '0M'.
MONITOR_REC-MSGTY = 'S'.
MONITOR_REC-MSGNO = '501'.
MONITOR_REC-MSGV1 = <RESULT_FIELDS>-/BIC/ZNOMTK.
MONITOR_REC-recno = sy-tabix.
APPEND MONITOR_REC to MONITOR.
RAISE exception type CX_RSROUT_ABORT.
DELETE RESULT_PACKAGE index sy-tabix.
CLEAR L_ERROR_FLAG.
else.
MODIFY RESULT_PACKAGE FROM <RESULT_FIELDS>.
endif.
clear l_nomit.
l_nomit = <RESULT_FIELDS>-/BIC/ZNOMIT.
extra_fields = <RESULT_FIELDS>.
Actual Qty and Contract details
Select /BIC/ZLOCSL /BIC/ZNOMIT /BIC/ZCONTRACT /BIC/ZACTULQTY
/BIC/ZSITYP /BIC/ZDOCIND
SOLD_TO SHIP_TO COMPANY
into (extra_fields-/BIC/ZLOCSL,
extra_fields-/BIC/ZNOMIT,
extra_fields-/BIC/ZCONTRACT,
extra_fields-/BIC/ZACTULQTY,
extra_fields-/BIC/ZSITYP,
extra_fields-/BIC/ZDOCIND,
extra_fields-SOLD_TO,
extra_fields-SHIP_TO,
extra_fields-company)
from /BIC/AZTSW_0000
where /BIC/ZNOMTK = <RESULT_FIELDS>-/BIC/ZNOMTK AND
/BIC/ZNOMIT <> l_NOMIT.
INSERT extra_fields into table EXTRA_PACKAGE.
endselect.
ENDLOOP.
Append lines of extra_package to RESULT_PACKAGE.
*-- fill table "MONITOR" with values of structure "MONITOR_REC"
*- to make monitor entries
... "to cancel the update process
raise exception type CX_RSROUT_ABORT.
$$ end of routine - insert your code only before this line -
ENDMETHOD. "end_routine
Method inverse_end_routine
This subroutine needs to be implemented only for direct access
(for better performance) and for the Report/Report Interface
(drill through).
The inverse routine should transform a projection and
a selection for the target to a projection and a selection
for the source, respectively.
If the implementation remains empty all fields are filled and
all values are selected.
METHOD inverse_end_routine.
$$ begin of inverse routine - insert your code only below this line-
... "insert your code here
$$ end of inverse routine - insert your code only before this line -
ENDMETHOD. "inverse_end_routine
ENDCLASS. "routine IMPLEMENTATION
Hi,
Most probably you are appending some records in the data package or deleting from the data package through end routine or expert routine or start routine.
I just solved it. You will have to import SAP Note 1180163.
Then modify the code you are using and include the function module mentioned in SAP Note 1223532.
You need to call the function module just before you append the records. This will work perfectly.
Thanks
Ajeet -
Oracle 11g - External Table/SQL Developer Issue?
Oracle 11g - External Table/SQL Developer Issue?
==============================
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04
We are trying to use an Oracle external table to load text files in .csv format. Here is what our data looks like:
======================
Date1,date2,Political party,Name, ROLE
20-Jan-66,22-Nov-69,Democratic,"John ", MMM
22-Nov-70,20-Jan-71,Democratic,"John Jr.",MMM
20-Jan-68,9-Aug-70,Republican,"Rick Ford Sr.", MMM
9-Aug-72,20-Jan-75,Republican,Henry,MMM
------ ALL NULL -- record
20-Jan-80,20-Jan-89,Democratic,"Donald Smith",MMM
======================
Our external table structure is as follows:
CREATE TABLE P_LOAD
(
DATE1 VARCHAR2(10),
DATE2 VARCHAR2(10),
POL_PRTY VARCHAR2(30),
P_NAME VARCHAR2(30),
P_ROLE VARCHAR2(5)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY P_EXT_TAB_D
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
SKIP 1
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
MISSING FIELD VALUES ARE NULL
(
DATE1 CHAR(10) TERMINATED BY ",",
DATE2 CHAR(10) TERMINATED BY ",",
POL_PRTY CHAR(30) TERMINATED BY ",",
P_NAME CHAR(30) TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
P_ROLE CHAR(5) TERMINATED BY ","
)
)
LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
It was created successfully using SQL Developer.
Here is the issue.
It is not loading the records where fields are enclosed in '"' (records 2, 3, 4, 7).
It is loading the all-NULL record (record 6).
*** If we remove the '"' from the input data, it loads all records, including the all-NULL record.
Log file has
KUP-04021: field formatting error for field P_NAME
KUP-04036: second enclosing delimiter not found
KUP-04101: record 2 rejected in file ....
Our questions
Why is "REJECT ROWS WITH ALL NULL FIELDS" not working?
Why is TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' not working?
Any idea?
Thanks for helping.
I don't think this is a SQL Developer issue. You will get better answers in the Database - General or perhaps the SQL and PL/SQL forums.
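As a side note on the KUP-04036 errors: mixing the record-level OPTIONALLY ENCLOSED BY clause with per-field Terminated by specs is a frequent cause. Since every column here is plain character data, one thing worth trying is dropping the per-field list and letting the record-level clause handle all enclosures. A sketch using the same table and directory names as above (an assumption-laden simplification, untested):

```sql
-- Hedged sketch: rely only on the record-level delimiter/enclosure spec.
CREATE TABLE p_load (
  date1    VARCHAR2(10),
  date2    VARCHAR2(10),
  pol_prty VARCHAR2(30),
  p_name   VARCHAR2(30),
  p_role   VARCHAR2(5)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY p_ext_tab_d
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
```

With no field list, each VARCHAR2 column is mapped positionally, and the enclosure handling applies uniformly to every field.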
-
Oracle APEX 4.0 - Interactive Report - Table Column Filter Issue
Environment: Oracle APEX 4.0 - Interactive Report - Table Column header Filter Issue
We have developed an interactive report using Oracle APEX 4.0 that contains around 3,000 rows, all with unique values. When we try to filter using the column header filter option of the interactive report, we get only 1,000 records.
Could someone explain this behaviour of the APEX column header filter, which seems not to display beyond 1,000 distinct values?
Is there a way or workaround to get all the records into the column header filter?
Thanks in advance.
Krish
Hi,
Thanks for the advice and this issue has been moved to the below URL
Oracle APEX 4.0 - Interactive Report - Table Column Filter Issue Posted: No
Krish -
SQLDeveloper 1.5.4 Table browsing performance issue
Hi all,
I have read previous posts regarding SQL Developer 1.5.3 table browsing performance issues. I downloaded and installed version 1.5.4, and it appears the problem has gotten worse!
It takes ages to display the rows of this particular table (the structure is shown below), and much longer to view it in Single Record format. Attempting to export the data is another frustrating exercise. By the way, TOAD does not seem to have this problem, so I guess it is a SQL Developer bug.
Can someone help with any workarounds?
Thanks
Chiedu
Here is the table structure:
create table EMAIL_SETUP
(
APPL_ID VARCHAR2(10) not null,
EML_ID VARCHAR2(10) not null,
EML_DESC VARCHAR2(80) not null,
PRIORITY_NO_DM NUMBER(1) default 3 not null
constraint CC_EMAIL_SETUP_4 check (
PRIORITY_NO_DM in (1,2,3,4,5)),
DTLS_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_5 check (
DTLS_YN in ('0','1')),
ATT_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_6 check (
ATT_YN in ('0','1')),
MSG_FMT VARCHAR2(5) default 'TEXT' not null
constraint CC_EMAIL_SETUP_7 check (
MSG_FMT in ('TEXT','HTML')),
MSG_TMPLT VARCHAR2(4000) not null,
MSG_MIME_TYPE VARCHAR2(500) not null,
PARAM_NO NUMBER(2) default 0 not null
constraint CC_EMAIL_SETUP_10 check (
PARAM_NO between 0 and 99),
IN_USE_YN VARCHAR2(1) not null
constraint CC_EMAIL_SETUP_11 check (
IN_USE_YN in ('0','1')),
DFLT_USE_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_12 check (
DFLT_USE_YN in ('0','1')),
TAB_NM VARCHAR2(30) null ,
FROM_ADDR VARCHAR2(80) null ,
RPLY_ADDR VARCHAR2(80) null ,
MSG_SBJ VARCHAR2(100) null ,
MSG_HDR VARCHAR2(2000) null ,
MSG_FTR VARCHAR2(2000) null ,
ATT_TYPE_DM VARCHAR2(4) null
constraint CC_EMAIL_SETUP_19 check (
ATT_TYPE_DM is null or (ATT_TYPE_DM in ('RAW','TEXT'))),
ATT_INLINE_YN VARCHAR2(1) null
constraint CC_EMAIL_SETUP_20 check (
ATT_INLINE_YN is null or (ATT_INLINE_YN in ('0','1'))),
ATT_MIME_TYPE VARCHAR2(500) null ,
constraint PK_EMAIL_SETUP primary key (EML_ID)
)
Check Tools | Preferences | Database | Advanced Parameters and post the value you have there.
Try setting it to a small number and report if you see any improvement.
-Raghu -
Retrieve the Purchase Order Condition Records Table
Hello!
I have found this code right here:
http://www.sap-basis-abap.com/sapab025.htm
It is very useful, particularly for the purposes I need. Please can somebody
try to fix the error to get it working? There is an internal table field missing.
Regards
Ilhan
Retrieve the Purchase Order Condition Records Table
select * from ekko.
select * from konv where knumv = ekko-knumv
"Get all the condition records for the purchase order
endselect.
endselect.
* Get the info record conditions record
* First declare the record structure for the key
data: begin of int_konp,
txt1(5),
lifnr(5),
matnr(18),
txt2(4),
werks(4), " plant field used below but missing from the original declaration
txt3(1),
end of int_konp.
clear: konh, konp, int_konp.
* data for the record key konh-vakey
int_konp-txt1 = '00000'.
int_konp-lifnr = ekko-lifnr+5(5).
int_konp-matnr = ekpo-matnr(18).
int_konp-txt2 = 'ALL'.
int_konp-werks = ekpo-werks.
int_konp-txt3 = '0'.
select * from konh where kschl = 'PB00' "Conditions (Header)
and datab >= p_datum. "valid from date
if konh-vakey = int_konp. "Conditions (Item)
select single * from konp where knumh = konh-knumh.
continue.
endif.
endselect.
Hi Flora,
Just go through the sequence and see the table fields:
1. From the EKKO table, take an entry that has pricing conditions. In the field list, check the field EKKO-KNUMV (document condition number).
2. Take this condition number and go to table KONV. Enter the document condition number in the field KONV-KNUMV and execute.
This will give a list of document condition numbers and some other fields.
3. Now check the fields KONV-KNUMH and KONV-KAWRT (quantity), and note the value of KONV-KWERT (remember, this is at header level).
This is your condition record number: from the document condition number we got the condition record number (KNUMH).
4. Now, since you want the item-level tax procedure, go to table KONP, enter the condition record number, and execute.
This will give you a list of details.
Concentrate on KONV-KAWRT (scale quantity) and KONP-KBETR (rate); since this table stores pricing per unit, the product of the two gives you the total pricing tax for a particular condition type, say PR00, for that particular condition item.
Check the pricing procedure: see transaction VK13.
From ME23, check the same PO number, select the item, and check the pricing conditions applicable.
Select a particular pricing condition and go to Condition -> Analysis -> Analysis Pricing.
Better to take the help of an SD functional consultant in the process.
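If it helps, the navigation described above can be compressed into one conceptual join. This is a sketch against the standard EKKO/KONV/KONP tables; in ECC you would express it as an ABAP SELECT or verify it in SE16, and :po_number is a placeholder:

```sql
-- Hedged sketch: PO header -> document conditions -> condition record details.
SELECT k.knumh,        -- condition record number
       p.kschl,        -- condition type (e.g. PB00)
       p.kbetr,        -- rate
       p.kpein         -- pricing unit
FROM ekko e
JOIN konv k ON k.knumv = e.knumv   -- document condition number link
JOIN konp p ON p.knumh = k.knumh   -- condition record items
WHERE e.ebeln = :po_number
  AND p.kschl = 'PB00';
```

Multiplying KONV-KAWRT by KONP-KBETR per row then gives the per-condition value described in the steps above.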
regards,
vijay. -
SM58 - IDoc adapter inbound: IDoc data record table contains no entries
Trying to send IDocs from SAP ECC 6.0 via PI 7.0; up until 2 days ago there was no problem.
Since yesterday, one specific type of IDoc does not make it into XI (PI). In the IDoc monitor (WE02), the IDocs that were created show status 3, which is good. But all IDocs of that specific type (ZRESCR01) do not go to XI. I can only find them back in SM58, where it gives the following message:
IDoc adapter inbound: IDoc data record table contains no entries
I have checked SAP Notes 1157385 and 940313, but neither gives me any more insight into this error. I have also checked all the configuration in WE20, SM59, and in XI (repository and directory, plus IDX1 and IDX2), but could not find anything that would cause this. I also cannot think of anything that changed in the last 2 days.
Please point me in the right direction.
Hi,
I think in SM58 you can find entries only when there is some failure in the login credentials.
If there is a change in the IDoc structure, then you have to re-import the IDoc metadata definition in IDX2; otherwise it is not required.
Please check that the logical system name points to your required target system.
Please also verify that your port is not blocked.
Please find the link below; it may help:
Monitoring the IDOC Adapter in XI/PI using IDX5
regards,
navneet -
Oracle 11g - External Table/Remote File Issue?
Oracle 11g - External Table/Remote File Issue?
=============================
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04
We are not allowed to put files on the file system of Server A, where the DB instance is running. We are able to place CSV files on another server (Server B), where the DB instance is not running.
We are trying to use an Oracle external table to load these .CSV text files.
How do we create a directory (CREATE DIRECTORY) on the Server A DB that points to the file system of Server B?
Is it feasible?
Any idea?
Thanks for helping.
The Solaris DBA should be able to mount the filesystem for you. Either that, or you have to get creative transferring the file, like this:
http://www.linkedin.com/groups/Getting-creative-external-table-preprocessor-140609.S.50474382?qid=ba673ce4-c4bb-40c5-8367-52bd2a2dfc80&trk=group_search_item_list-0-b-ttl&goback=%2Egmp_140609
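If the mount route is taken, the rest is an ordinary directory object. A sketch, assuming the DBA has NFS-mounted Server B's share on Server A at a hypothetical path /mnt/serverb/csv, and that load_user is a placeholder for your schema:

```sql
-- Hedged sketch: the path must be visible to the database server
-- process on Server A (the DB host), not to the client machine.
CREATE OR REPLACE DIRECTORY p_ext_tab_d AS '/mnt/serverb/csv';
GRANT READ, WRITE ON DIRECTORY p_ext_tab_d TO load_user;
```

The external table's DEFAULT DIRECTORY clause can then reference p_ext_tab_d as usual.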
Cheers
David -
Condition record table to pick printer
Hi Friends,
What is the condition record table? Based on this table, how can I pick the printer, and how can I find this table? Can anyone help me? It's very urgent.
Regards,
DVNS.
You can see the list of processed documents through output types in the table NAST.
Check the table TNAPR in SE16. It will have all the output condition records assigned through transaction NACE.
You can find the output types in the field KSCHL of the TNAPR table.
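To make the lookup above concrete, here is a sketch of the two checks (SE16-style selects; :output_type and :document_key are placeholders). NAST-LDEST is the output device (printer) actually used for a processed message, while TNAPR carries the processing routine assigned per output type:

```sql
-- Hedged sketch: printer (output device) per processed output record.
SELECT objky, kschl, ldest, vstat
FROM nast
WHERE kschl = :output_type
  AND objky = :document_key;

-- Processing program/routine/form assigned to the output type.
SELECT kschl, pgnam, ronam, fonam
FROM tnapr
WHERE kschl = :output_type;
```

The condition records themselves (which partner/printer combination applies) are maintained through NACE, as noted above.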
Help with querying a 200 million record table
Hi ,
I need to query a 200 million record table which is partitioned by monthly activity.
My problem is that I need to see how many activities occurred on one account in a given time frame.
If there are 200 partitions, I would need to go into all the partitions, get the activities for the account in each partition, and at the end sum up the number of activities.
Fortunately, only one activity is expected per account per partition, and it may be present or absent.
If this table had 100 records, I would use this:
select account_no, count(*)
from Acct_actvy
group by account_no;
I must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer must treat a partitioned table no differently than any other table.
To give you an idea, this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12-month period. There's a partition per day, plus empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
SQL> select count(*) from x25_calls;
COUNT(*)
619491919
Elapsed: 00:00:19.68
SQL>
SQL> select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
2 group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
MONTH SOURCENETWORKADDRESS COUNT(*)
2005/09/01 00:00:00 3103165962 3599
2005/10/01 00:00:00 3103165962 1184
2005/12/01 00:00:00 3103165962 4
2005/06/01 00:00:00 3103165962 1
2005/04/01 00:00:00 3103165962 560
2005/08/01 00:00:00 3103165962 101
2005/03/01 00:00:00 3103165962 3330
7 rows selected.
Elapsed: 00:00:19.72
As you can see, there is not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
The reason for the performance is simple: a correctly designed and implemented partitioning scheme that caters for most of the queries against the table, and correctly designed and implemented indexes, especially local bitmap indexes. Without any hacks like partition names and the like.
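Applied to the original Acct_actvy question, the query keeps exactly the shape already posted, with a date-range predicate added so the optimizer prunes to the relevant monthly partitions on its own. A sketch; the column name activity_date and the date range are assumptions:

```sql
-- Hedged sketch: no partition names anywhere; pruning comes from the
-- predicate on the (assumed) partitioning key column.
SELECT account_no, COUNT(*) AS activity_count
FROM acct_actvy
WHERE activity_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31'
GROUP BY account_no;
```

Restricting to one account is just an additional `WHERE account_no = :acct` predicate; the partition handling stays invisible either way.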
Selecting Records from 125 million record table to insert into smaller table
Oracle 11g
I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company.
I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and for other relevant information.
My question is: what is the best (fastest) approach to select records from this 125 million record table and insert them into the smaller table? I have tried a variety of things: views, materialized views, direct insert into the smaller table. I think I am probably missing other approaches.
My current attempt has been to create a view using the query that selects the records, as shown below, and then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
CREATE VIEW V_Market as
WITH got_pairs AS
(
SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */ l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type as l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type as filler_data, l.p1_seq_no, l.p2_seq_no
, ROW_NUMBER () OVER ( PARTITION BY l.address_key
ORDER BY l.hh_verification_date DESC
) AS r_num
FROM t3_universe e
JOIN t3_universe l ON
l.address_key = e.address_key
AND l.zip_code = e.zip_code
AND l.p1_gender != e.p1_gender
AND l.household_key != e.household_key
AND l.hh_verification_date >= e.hh_verification_date
)
SELECT *
FROM got_pairs
where l_hh_type != 1 and l_hh_type != 2 and filler_data != 1 and filler_data != 2 and zip_code in (select * from M_mansfield_02048) and p1_exact_age BETWEEN 25 and 70 and narrowband_income >= '8' and r_num = 1
Then
INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
from V_Market;
I had no trouble creating the view exactly as you posted it. However, be careful here:
and zip_code in (select * from M_mansfield_02048)
You should name the column explicitly rather than select *. (do you really have separate tables for different zip codes?)
About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces but simply encapsulating your query into a view is not likely to make it any faster.
Depending on the size of the subset of rows you're selecting, the /*+ INDEX hint may be doing your more harm than good. -
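One pattern worth trying instead of the intermediate view is a single direct-path insert: the APPEND hint asks Oracle to load above the high-water mark and bypass the buffer cache, which often helps bulk copies like this. A sketch reusing the posted column lists, untested against the actual schema:

```sql
-- Hedged sketch: single-statement, direct-path load from the big table.
INSERT /*+ APPEND */ INTO t3_leads (
  zip, zip4, firstname, lastname, address, city, state,
  household_key, hh_type, address_key, income, relationship_status,
  gender, age, person_key, filler_data, p1_seq_no, p2_seq_no
)
SELECT zip_code, zip_plus_4, p1_givenname, surname, address, city, state,
       household_key, l_hh_type, address_key, narrowband_income, p1_ms,
       p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
FROM   v_market;
COMMIT;  -- a direct-path insert must be committed before the table is queried again
```

Whether this helps depends mostly on the cost of the underlying self-join, so checking the explain plan, as suggested above, still comes first.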
ADF vertical table scroll bar issue when many records.
Good morning!
I am using Oracle JDeveloper version 11.1.1.6.0.
I have a search form and a simple read-only table displaying about 1 million records.
At the top of my search form, I have navigation buttons (First, Previous, Next, Last).
So when I click, for example, on the Next button, my table highlights the next corresponding row as expected.
But here is the issue: when I click the Last button, the vertical scroll bar of the table does not scroll (automatically) to the end to display the very last row of my table.
How can I achieve this? When I manually scroll to the last row, I find it highlighted, but I need the vertical scroll bar to react to Last and First button clicks even though I have a big number of rows.
What should I do?
Any suggestion will be very much appreciated.
Life.
Hi Frank,
I have found the answer. Here it is for anyone who would ever have or meet the same issue.
In the JSF table property inspector:
I bound my table to my Java bean (I called it myRichTable).
I added the First and Last buttons as partial triggers of my table.
In the Java bean:
In my last() method I included the following line of code: myRichTable.setDisplayRow(RichTable.DISPLAY_ROW_LAST);
And in first(): myRichTable.setDisplayRow(RichTable.DISPLAY_ROW_FIRST);
Thanks! -
I am currently having an issue linking tables in a report. When I generate the report, the records in the detail section are duplicated. To explain further: in one table I am looking for item numbers, transaction dates and transaction quantities.
In the next, I am looking for item numbers, transaction dates, and shipment quantities.
In the last I am retrieving item numbers and descriptions.
The only fields that seem to be consistent between tables are the item numbers, so I am joining on that basis.
However, what ends up happening is this:
Item #   Date      Trans Qty   Date      Ship Qty
1001 (Group)
         5/12/09   49000       5/20/09   20000
         5/12/09   49000       5/28/09   12000
         6/1/09    30000       5/20/09   20000
         6/1/09    30000       5/28/09   12000
2001 (Group)
         5/12/09   20000       5/5/09    20000
         5/12/09   20000       5/19/09   12000
         5/12/09   20000       6/5/09    15000
If you know why this is happening, or better yet, a way to fix it, I would greatly appreciate your help.

It is happening because the item number appears multiple times in one or more of the "left tables" in the join.
How to fix it depends on what you are trying to accomplish.
If you are trying to match something like Order Details to Shipment Details against those orders, you'll need an order and line number in your Shipment Details file that reference the same fields in the Order Detail file.
If you are just looking for total ordered vs total shipped, you'll need to aggregate the ordered and shipped quantities by item BEFORE doing the join. In SQL it would look something like this (MS SQL):
select isnull(a.item_no, b.item_no) as Item_no,
isnull(a.ordered_qty,0) as ordered_qty,
isnull(b.shipped_qty, 0) as shipped_qty
from (
select item_no, sum(ordered_qty) as ordered_qty
from Order_Details
where date between {?start date} and {?end date}
group by item_no
) a
full outer join (
select item_no, sum(shipped_qty) as shipped_qty
from Shipment_Details
where date between {?start date} and {?end date}
group by item_no
) b
on a.item_no = b.item_no
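To see why aggregating before the join removes the duplication, here is a small Python sketch of the same idea (toy rows standing in for Order_Details and Shipment_Details; the quantities mirror the report above but are otherwise made up):

```python
from collections import defaultdict

# Toy detail rows: (item_no, qty). Item 1001 appears on several lines
# on each side, which is what causes the join fan-out.
orders = [("1001", 49000), ("1001", 30000), ("2001", 20000)]
shipments = [("1001", 20000), ("1001", 12000), ("2001", 20000)]

def aggregate(rows):
    """Sum quantities per item_no, mirroring the GROUP BY subqueries."""
    totals = defaultdict(int)
    for item_no, qty in rows:
        totals[item_no] += qty
    return totals

ordered = aggregate(orders)
shipped = aggregate(shipments)

# Full outer join on item_no: each item from either side appears once;
# a missing side defaults to 0, like the ISNULL(..., 0) in the SQL.
result = {
    item: (ordered.get(item, 0), shipped.get(item, 0))
    for item in set(ordered) | set(shipped)
}
```

With the raw join, the two order rows and two shipment rows for item 1001 pair up into 2 x 2 = 4 detail rows, which is exactly the duplication in the report; after aggregating first, each item appears exactly once with its totals.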
HTH,
Carl -
Performance issues in million records table
I have a scenario with some 20 tables, each holding a million or more records. [Historical]
On average I add 1,500 to 2,500 records a day, i.e. roughly a million records a year.
I am looking for archival solutions for these master tables.
Operations on archival tables would be limited to reads.
Expected benefits
The user base would be around 2,500 users in total, but I expect 300 to 500 parallel users at most.
Very limited usage on Historical data - compared to operations on current data
Performance of operations on current data matters more than performance on historical data.
Environment: Oracle 9i; we should be migrating to Oracle 10g soon.
Some solutions I could think of:
[1] Put every archived record into an archival table and fetch it from there,
i.e. clearly distinguish searches as current or archival prior to searching.
The drawback I see is that the archival tables again grow by roughly a million records a year.
[2] Put records into separate archival tables, one per year.
For instance, every year I replicate the set of tables and that year's data goes into them.
But how do I do a fetch?
Note: I do have a unique way of identifying each record in my master table; the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
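If you go with per-year tables, the year prefix of that key can indeed route a fetch to the right table. A minimal Python sketch, assuming a hypothetical MASTER_ARC_<year> naming convention for the archival tables:

```python
def archive_table_for(record_key: str) -> str:
    """Derive the per-year archive table name from a key in
    YYYYMMXXXXXXXXXX format (16 digits), e.g.
    '2008070000562330' -> 'MASTER_ARC_2008'.
    The 'MASTER_ARC_' prefix is a made-up naming convention."""
    if len(record_key) != 16 or not record_key.isdigit():
        raise ValueError(f"unexpected key format: {record_key!r}")
    return f"MASTER_ARC_{record_key[:4]}"
```

That said, Oracle range partitioning gives you the same per-year routing transparently inside one logical table, without changing any application queries, which addresses your last concern directly.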
The major concern: I currently get very good response times thanks to indexing and the usual tuning, and I do not want that to degrade in a year or more; if anything, I expect to improve on the current response times and sustain them over time.
Also, I don't want to change every query in my app unless there is no way out.

Hi,
Read the following documentation link about Partitioning in Oracle.
Best Regards,
Alex -
Hi,
I am having an issue with table control scrolling. When I pass a small number of records (say 19, because the table control has 19 lines) to the table control in a BDC call transaction, everything works fine. But after filling the 19 line items it does not take the next line item; the page does not scroll down. The transaction code is GS02. Please advise; my code follows:
REPORT ZLOCK_WBS_ELEMENTS MESSAGE-ID ZFI_RESTMT.
* TYPES *
*types declaration for final internal table
types: begin of ty_final,
ryear like zupi5a-ryear, "Fiscal year
rbukrs like zupi5a-rbukrs, "Company code
racct like zupi5a-racct, "Account number
rzzps_posid like zupi5a-rzzps_posid, "WBS element
rzzmtit like zupi5a-rzzmtit, "MPM title
rzzmfor like zupi5a-rzzmfor, "MPM format
rzzmatnr like zupi5a-rzzmatnr, "Material number
rzzcou like zupi5a-rzzcou, "Country
rzzfow like zupi5a-rzzfow, "Financial owner
rzzoow like zupi5a-rzzoow, "Operational owner
rzzcon like zupi5a-rzzcon, "Licensee Contract
rzzloc like zupi5a-rzzloc, "Licensor Contract
kostl like zupi5a-kostl, "Cost center
zzfam like zupi5a-zzfam, "Fame Number
zzfor like zupi5a-zzfor, "Format
zzprd like zupi5a-zzprd, "Product Line
zzwin like zupi5a-zzwin, "Window group
zzwig like zupi5a-zzwig, "Window
rtcur like zupi5a-rtcur, "Currency Key
tsl like zupi5a-tsl, "Amount Transaction currency
hsl like zupi5a-hsl, "Amount Co. code currency
ksl like zupi5a-ksl, "Amount Group currency
msl like zupi5a-msl, "Quantity
end of ty_final.
* Data *
data: j_final2 type standard table of ty_final,
v_final2 type standard table of ty_final.
data: wa_final2 type ty_final.
data: bdcdata like bdcdata occurs 0 with header line,
messtab like bdcmsgcoll occurs 0 with header line.
data :begin of i_values occurs 0.
include structure setvalues.
data :end of i_values.
data: v_counter(3) type n value '0',
v_from(30) type c,                 " screen field name, e.g. RGSBL-FROM(01)
v_setname like zfi_setid_cc-setid,
v_setid like sethier-setid,
n type i,
l type i,
k type i value '1',
p_rbukrs like zupi5a-rbukrs.
import p_rbukrs from memory id 'bukrsid'.
import i_final2 to j_final2 from memory id 'table'.
* To eliminate duplicate WBS elements to be stored into the sets
v_final2 = j_final2.
sort v_final2 by rzzps_posid.
delete adjacent duplicates from v_final2 comparing rzzps_posid.
select single setid into v_setname
from zfi_setid_cc
where rbukrs EQ p_rbukrs.
IF sy-subrc <> 0.
MESSAGE E005.
ENDIF.
*write 'ZFIRESTATEMENT' to v_setname.
call function 'G_SET_GET_ID_FROM_NAME'
EXPORTING
shortname = v_setname
IMPORTING
new_setid = v_setid.
call function 'G_SET_TREE_IMPORT'
EXPORTING
client = sy-mandt
langu = sy-langu
setid = v_setid
TABLES
set_values = i_values.
describe table i_values lines n.
describe table v_final2 lines l.
write n to v_counter.
clear bdcdata.
refresh bdcdata.
perform bdc_dynpro using 'SAPMGSBM' '0105'.
perform bdc_field using 'BDC_CURSOR'
'RGSBM-SHORTNAME'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
*perform bdc_field using 'RGSBM-SHORTNAME'
*                        'ZFIRESTATEMENT'.
perform bdc_field using 'RGSBM-SHORTNAME'
v_setname.
loop at v_final2 into wa_final2.
v_counter = v_counter + 1.
perform bdc_dynpro using 'SAPMGSBM' '0115'.
concatenate 'RGSBL-FROM(' v_counter ')' into v_from.
perform bdc_field using 'BDC_CURSOR'
v_from.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'RGSBS-TITLE'
'FI Restatement-WBS locking'.
perform bdc_field using v_from
wa_final2-rzzps_posid.
endloop.
perform bdc_dynpro using 'SAPMGSBM' '0115'.
perform bdc_field using 'BDC_CURSOR'
v_from.
perform bdc_field using 'BDC_OKCODE'
'=SAVE'.
perform bdc_field using 'RGSBS-TITLE'
'FI Restatement-WBS locking'.
perform bdc_dynpro using 'SAPMGSBM' '0105'.
perform bdc_field using 'BDC_OKCODE'
'/EBACK'.
perform bdc_field using 'BDC_CURSOR'
'RGSBM-SHORTNAME'.
call transaction 'GS02'
using bdcdata
mode 'A'
update 'S'
messages into messtab.
* Start new screen *
form bdc_dynpro using program dynpro.
clear bdcdata.
bdcdata-program = program.
bdcdata-dynpro = dynpro.
bdcdata-dynbegin = 'X'.
append bdcdata.
endform.
* Insert field *
form bdc_field using fnam fval.
if fval <> ' '.
clear bdcdata.
bdcdata-fnam = fnam.
bdcdata-fval = fval.
append bdcdata.
endif.
endform.
This works fine when there are 19 or fewer line items. Please suggest the logic for when there are more than 19 line items.

Hi,
Just try to increase the number of table control lines before the display.
v_counter = n + 10.
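Beyond that, the usual BDC pattern for more than 19 rows is to fill the 19 visible rows, send a page-down OK-code (often 'P+' or '=P+', but it is screen-dependent, so verify it by recording GS02 with SHDB), and then continue filling from the first visible row of the next page. The batching arithmetic itself is language-neutral; a Python sketch of splitting the records into screen-sized pages (illustrative helper, not an SAP API):

```python
def paginate(records, page_size=19):
    """Yield screen-sized batches. In the BDC you would fill one batch,
    send the page-down OK-code, then fill the next batch starting at
    the first visible row again."""
    for start in range(0, len(records), page_size):
        yield records[start:start + page_size]

pages = list(paginate(list(range(45)), page_size=19))
# 45 records on a 19-line control -> batches of 19, 19 and 7 rows
```

Inside your loop, that means the row index used in 'RGSBL-FROM(nn)' should be the position within the current page, not the running count over all records.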