Subtract a workpath selection from a larger workpath
My job prohibits me from sharing images, so if needed I can make a quick sketch if I can't get my question across clearly enough.
Here is what I have:
The image is a stack of objects. I have made a work path of all the objects (one large selection labeled "All" around all the objects, to isolate them from the background), and another work path of just the bottom object (labeled "Bottom").
My goal:
I do this quite often, so I'd like to be able to just subtract the "Bottom" work path selection from the larger "All" work path selection.
Thanks
P.S. I tried the FAQ and search before posting.
Thanks, I knew there was a quick command for that; I searched and searched and couldn't find it. I have done it before too, just not in a while. As for deleting layer content, I agree for the most part, but in this case it doesn't matter since I only end up with one layer (all original from the photographer) at the end and don't save in a format that supports masks.
Thanks for your quick response and helping me make an easy and more accurate work path!
Similar Messages
-
SELECTing from a large table vs small table
I posted a question a few months back about the comparison between INSERTing into a large table vs. a small table (fewer rows), in terms of time taken.
The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
1. But now, following the same logic, I'm confused why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table.
(SELECTing using an index)
My understanding of how Oracle works internally is this:
It will first locate the ROWID from the B-tree that stores the index.
(This operation is O(log N), based on the B-tree.)
The ROWID essentially contains the file/offset of the location of the data on disk.
Oracle then simply reads the data from the location it deduced from the ROWID.
But then the only variable I see is searching the B-tree, which should take O(log N) time for comparisons (N = number of rows).
Am I correct above?
2. Also, I read that tables are partitioned for performance reasons. I read about various partition mechanisms but cannot figure out how they can result in a performance improvement.
Can somebody please help?
It's not going to be that simple. Before your first step (locating the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one it thinks will be best. This evaluation is based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. etc. etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal" assumptions. Then once the plan is selected and the rubber meets the road, we have to contend with the fact that all else is hardly ever equal. -
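You can see which plan the optimizer actually picked for a given query with EXPLAIN PLAN; a minimal sketch (the table and column names here are hypothetical, not from the thread):

```sql
-- Ask the optimizer to record its chosen plan without running the query.
EXPLAIN PLAN FOR
  SELECT * FROM big_tab WHERE id = 42;

-- Display the plan: you can see whether it chose an index range scan,
-- a full table scan, an index-only access, etc.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Comparing the plan on a large vs. small table makes the "all else being equal" caveats above concrete.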
Hi,
I have to select from a large table called sales_data, which has 1 crore (10 million) rows.
It has an index.
I want to select all the rows. It is taking about 1 hour to select and insert into a new table:
Create table new_tab as select col1,col2,col3,col4 from sales_data;
Is there any way to reduce the time?
TIA
Have you tried the serial/parallel direct-load INSERT method? It will also help if you can disable the constraints on the target_table before performing the DML.
You can give it a try (either of the 2 options below) and see if it improves the performance.
An INDEX on the target_table is NOT required -- DISABLE it before performing the DML.
SQL> <Your query to disable> constraints on the target_table
SQL> ALTER TABLE target_table NOLOGGING;
Option #1: the SERIAL direct-load method,
INSERT /*+ APPEND */
INTO target_table
SELECT * FROM source_table;
Option #2: the PARALLEL direct-load method,
SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ PARALLEL(target_table,12) */ INTO target_table
SELECT /*+ PARALLEL(source_table,12) */ * FROM source_table;
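Since the original statement was a CTAS, the same direct-path idea can be applied to the CREATE TABLE itself; a hedged sketch (the degree of 12 just mirrors the hints above -- tune it to your CPU and I/O capacity):

```sql
-- NOLOGGING + PARALLEL CTAS: the load is direct-path and parallelized,
-- avoiding most redo generation. Sketch only; adjust the degree.
CREATE TABLE new_tab PARALLEL 12 NOLOGGING AS
  SELECT /*+ PARALLEL(s,12) */ col1, col2, col3, col4
  FROM sales_data s;
```

Remember to take a backup afterwards if you rely on NOLOGGING, since those blocks are not recoverable from the redo stream.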
Good luck.
Shailender Mehta -
Oracle ODBC Gateway SELECT from Sybase fails on large column
OS: CentOS 5.8 64-bit
DB: Oracle XE 11gR1 64-bit
Gateway: Oracle Gateway for ODBC 64-bit
Database and gateway reside on same Linux Server.
Connecting to remote Sybase SQL Anywhere 10 server on WindowsXP.
Using SQL Anywhere 11 odbc driver and unixODBC driver manager on Linux server.
isql tool connects without any problems.
One LISTENER, service for db and gateway on same port.
===================================
Via SQL*Plus, the following error occurs....
SQL> select * from mytable@dblink;
select * from mytable@dblink
ORA-02070: database dblink does not support outer joins in this context
The gateway does not like the 'large' column, which is varchar(3270) in length. It is also the only column in the table that is that wide a varchar.
Take that column out and SELECT works fine.
Here is the initdblink.ora file:
# This is a sample agent init file that contains the HS parameters that are
# needed for the Database Gateway for ODBC
# HS init parameters
HS_FDS_CONNECT_INFO=dblink
HS_FDS_SHAREABLE_NAME=/usr/lib64/libodbc.so
HS_FDS_TRACE_LEVEL=255
HS_LANGUAGE=american_america.we8iso8859p1
HS_NLS_NCHAR=UTF-8
# ODBC specific environment variables
set ODBCINI=/etc/odbc.ini
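One init parameter sometimes suggested for wide varchar columns through a 64-bit DG4ODBC is HS_FDS_SQLLEN_INTERPRETATION, since a 64-bit ODBC driver compiled with a 32-bit SQLLEN can cause the gateway to misread column lengths. This is only a hedged suggestion to verify against your gateway documentation; the trace below suggests this gateway release may not accept the value:

```sql
-- Hedged suggestion for the initdblink.ora file above (not from the thread):
-- tells the gateway the driver uses 32-bit SQLLEN values. Check whether
-- your 11.2 gateway release supports this parameter before relying on it.
-- HS_FDS_SQLLEN_INTERPRETATION=32
```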
Please advise....
Edited by: user601798 on Oct 17, 2012 7:09 AM
Edited by: user601798 on Oct 17, 2012 7:13 AM
Here is the trace file:
Oracle Corporation --- THURSDAY OCT 18 2012 14:55:34.259
Heterogeneous Agent Release
11.2.0.1.0
Oracle Corporation --- THURSDAY OCT 18 2012 14:55:34.259
Version 11.2.0.1.0
Entered hgogprd
HOSGIP for "HS_FDS_TRACE_LEVEL" returned "255"
Entered hgosdip
setting HS_OPEN_CURSORS to default of 50
setting HS_FDS_RECOVERY_ACCOUNT to default of "RECOVER"
setting HS_FDS_RECOVERY_PWD to default value
setting HS_FDS_TRANSACTION_LOG to default of HS_TRANSACTION_LOG
setting HS_IDLE_TIMEOUT to default of 0
setting HS_FDS_TRANSACTION_ISOLATION to default of "READ_COMMITTED"
setting HS_NLS_NCHAR to default of "UCS2"
setting HS_FDS_TIMESTAMP_MAPPING to default of "DATE"
setting HS_FDS_DATE_MAPPING to default of "DATE"
setting HS_RPC_FETCH_REBLOCKING to default of "ON"
setting HS_FDS_FETCH_ROWS to default of "100"
setting HS_FDS_RESULTSET_SUPPORT to default of "FALSE"
setting HS_FDS_RSET_RETURN_ROWCOUNT to default of "FALSE"
setting HS_FDS_PROC_IS_FUNC to default of "FALSE"
setting HS_FDS_CHARACTER_SEMANTICS to default of "FALSE"
setting HS_FDS_MAP_NCHAR to default of "TRUE"
setting HS_NLS_DATE_FORMAT to default of "YYYY-MM-DD HH24:MI:SS"
setting HS_FDS_REPORT_REAL_AS_DOUBLE to default of "FALSE"
setting HS_LONG_PIECE_TRANSFER_SIZE to default of "65536"
setting HS_SQL_HANDLE_STMT_REUSE to default of "FALSE"
setting HS_FDS_QUERY_DRIVER to default of "TRUE"
setting HS_FDS_SUPPORT_STATISTICS to default of "FALSE"
Parameter HS_FDS_QUOTE_IDENTIFIER is not set
setting HS_KEEP_REMOTE_COLUMN_SIZE to default of "OFF"
setting HS_FDS_GRAPHIC_TO_MBCS to default of "FALSE"
setting HS_FDS_MBCS_TO_GRAPHIC to default of "FALSE"
Default value of 32 assumed for HS_FDS_SQLLEN_INTERPRETATION
setting HS_CALL_NAME_ISP to "gtw$:SQLTables;gtw$:SQLColumns;gtw$:SQLPrimaryKeys;gtw$:SQLForeignKeys;gtw$:SQLProcedures;gtw$:SQLStatistics;gtw$:SQLGetInfo"
setting HS_FDS_DELAYED_OPEN to default of "TRUE"
setting HS_FDS_WORKAROUNDS to default of "0"
Exiting hgosdip, rc=0
ORACLE_SID is "dblink"
Product-Info:
Port Rls/Upd:1/0 PrdStat:0
Agent:Oracle Database Gateway for ODBC
Facility:hsa
Class:ODBC, ClassVsn:11.2.0.1.0_0008, Instance:dblink
Exiting hgogprd, rc=0
hostmstr: 2056122368: HOA After hoagprd
hostmstr: 2056122368: HOA Before hoainit
Entered hgoinit
HOCXU_COMP_CSET=1
HOCXU_DRV_CSET=31
HOCXU_DRV_NCHAR=1000
HOCXU_DB_CSET=873
HOCXU_SEM_VER=110000
Entered hgolofn at 2012/10/18-14:55:39
Exiting hgolofn, rc=0 at 2012/10/18-14:55:39
HOSGIP for "HS_OPEN_CURSORS" returned "50"
HOSGIP for "HS_FDS_FETCH_ROWS" returned "100"
HOSGIP for "HS_LONG_PIECE_TRANSFER_SIZE" returned "65536"
HOSGIP for "HS_NLS_NUMERIC_CHARACTER" returned ".,"
HOSGIP for "HS_KEEP_REMOTE_COLUMN_SIZE" returned "OFF"
HOSGIP for "HS_FDS_DELAYED_OPEN" returned "TRUE"
HOSGIP for "HS_FDS_WORKAROUNDS" returned "0"
HOSGIP for "HS_FDS_MBCS_TO_GRAPHIC" returned "FALSE"
HOSGIP for "HS_FDS_GRAPHIC_TO_MBCS" returned "FALSE"
Invalid value of 32 given for HS_FDS_SQLLEN_INTERPRETATION
treat_SQLLEN_as_compiled = 1
Exiting hgoinit, rc=0 at 2012/10/18-14:55:40
hostmstr: 2056122368: HOA After hoainit
hostmstr: 2056122368: HOA Before hoalgon
Entered hgolgon at 2012/10/18-14:55:40
reco:0, name:dba, tflag:0
Entered hgosuec at 2012/10/18-14:55:41
Exiting hgosuec, rc=0 at 2012/10/18-14:55:41
HOSGIP for "HS_FDS_RECOVERY_ACCOUNT" returned "RECOVER"
HOSGIP for "HS_FDS_TRANSACTION_LOG" returned "HS_TRANSACTION_LOG"
HOSGIP for "HS_FDS_TIMESTAMP_MAPPING" returned "DATE"
HOSGIP for "HS_FDS_DATE_MAPPING" returned "DATE"
HOSGIP for "HS_FDS_CHARACTER_SEMANTICS" returned "FALSE"
HOSGIP for "HS_FDS_MAP_NCHAR" returned "TRUE"
HOSGIP for "HS_FDS_RESULTSET_SUPPORT" returned "FALSE"
HOSGIP for "HS_FDS_RSET_RETURN_ROWCOUNT" returned "FALSE"
HOSGIP for "HS_FDS_PROC_IS_FUNC" returned "FALSE"
HOSGIP for "HS_FDS_REPORT_REAL_AS_DOUBLE" returned "FALSE"
using dba as default value for "HS_FDS_DEFAULT_OWNER"
HOSGIP for "HS_SQL_HANDLE_STMT_REUSE" returned "FALSE"
Entered hgocont at 2012/10/18-14:55:42
HS_FDS_CONNECT_INFO = "dblink"
RC=-1 from HOSGIP for "HS_FDS_CONNECT_STRING"
Entered hgogenconstr at 2012/10/18-14:55:43
dsn:dblink, name:dba
optn:
Entered hgocip at 2012/10/18-14:55:43
dsn:dblink
Exiting hgocip, rc=0 at 2012/10/18-14:55:43
##>Connect Parameters (len=25)<##
## DSN=dblink;
#! UID=dba;
#! PWD=*
Exiting hgogenconstr, rc=0 at 2012/10/18-14:55:44
Entered hgolosf at 2012/10/18-14:55:44
ODBC Function-Available-Array 0xFFFE 0x01FF 0xFF00 0xFFFF 0x03FF 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0x0000 0x0000 0xFE00 0x3F5F
Exiting hgolosf, rc=0 at 2012/10/18-14:55:46
DriverName:DBODBC10.DLL, DriverVer:10.00.0001
DBMS Name:SQL Anywhere, DBMS Version:10.00.0001
Exiting hgocont, rc=0 at 2012/10/18-14:55:47
SQLGetInfo returns N for SQL_CATALOG_NAME
Exiting hgolgon, rc=0 at 2012/10/18-14:55:48
hostmstr: 2027339776: HOA After hoalgon
RPC Calling nscontrol(0), rc=0
hostmstr: 2027339776: RPC Before Upload Caps
hostmstr: 2027339776: HOA Before hoaulcp
Entered hgoulcp at 2012/10/18-14:55:48
Entered hgowlst at 2012/10/18-14:55:48
Exiting hgowlst, rc=0 at 2012/10/18-14:55:49
SQLGetInfo returns 0x1f for SQL_OWNER_USAGE
TXN Capable:3, Isolation Option:0xf
SQLGetInfo returns 128 for SQL_MAX_SCHEMA_NAME_LEN
SQLGetInfo returns 128 for SQL_MAX_TABLE_NAME_LEN
SQLGetInfo returns 128 for SQL_MAX_PROCEDURE_NAME_LEN
SQLGetInfo returns " (0x22) for SQL_IDENTIFIER_QUOTE_CHAR
SQLGetInfo returns Y for SQL_COLUMN_ALIAS
3 instance capabilities will be uploaded
capno:1989, context:0x00000000, add-info: 0
capno:1991, context:0x0001ffff, add-info: 0
capno:1992, context:0x0001ffff, add-info: 0
Exiting hgoulcp, rc=0 at 2012/10/18-14:56:05
hostmstr: 2026291200: HOA After hoaulcp
hostmstr: 2026291200: RPC After Upload Caps
hostmstr: 2026291200: RPC Before Upload DDTR
hostmstr: 2026291200: HOA Before hoauldt
Entered hgouldt at 2012/10/18-14:56:06
NO instance DD translations were uploaded
Exiting hgouldt, rc=0 at 2012/10/18-14:56:06
hostmstr: 2026291200: HOA After hoauldt
hostmstr: 2026291200: RPC After Upload DDTR
hostmstr: 2026291200: RPC Before Begin Trans
hostmstr: 2026291200: HOA Before hoabegn
Entered hgobegn at 2012/10/18-14:56:06
tflag:0 , initial:1
hoi:0x12f094, ttid (len 27) is ...
00: 44415441 5748442E 65623465 33343931 [DATAWHD.eb4e3491]
10: 2E322E36 322E3839 363837 [.2.62.89687]
tbid (len 24) is ...
00: 44415441 5748445B 322E3632 2E383936 [DATAWHD[2.62.896]
10: 38375D5B 312E345D [87][1.4]]
Exiting hgobegn, rc=0 at 2012/10/18-14:56:08
hostmstr: 2026291200: HOA After hoabegn
hostmstr: 2026291200: RPC After Begin Trans
hostmstr: 2026291200: RPC Before Describe Table
hostmstr: 2026291200: HOA Before hoadtab
Entered hgodtab at 2012/10/18-14:56:08
count:1
table: RSCCC.SR_SPEC_PGM_SPEC_ED
Allocate hoada[0] @ 025B799C
Entered hgopcda at 2012/10/18-14:56:12
Column:1(SCH_YR): dtype:12 (VARCHAR), prc/scl:4/0, nullbl:0, octet:4, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:13
Entered hgopcda at 2012/10/18-14:56:13
Column:2(CAMPUS_ID): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:13
Entered hgopcda at 2012/10/18-14:56:14
Column:3(STU_ID): dtype:12 (VARCHAR), prc/scl:6/0, nullbl:0, octet:6, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:14
Entered hgopcda at 2012/10/18-14:56:14
Column:4(DT_ENTRY_STU): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:14
Entered hgopcda at 2012/10/18-14:56:15
Column:5(PRI_HANDI_IND): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:15
Entered hgopcda at 2012/10/18-14:56:15
Column:6(INSTRUCT_SET_CD): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:16
Entered hgopcda at 2012/10/18-14:56:16
Column:7(SPEECH_THRPY_IND): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:16
Entered hgopcda at 2012/10/18-14:56:17
Column:8(DT_WD): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:17
Entered hgopcda at 2012/10/18-14:56:17
Column:9(DT_ENTRY_STU_RECIP): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:18
Entered hgopcda at 2012/10/18-14:56:18
Column:10(WD_RSN_CD): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:18
Entered hgopcda at 2012/10/18-14:56:19
Column:11(VOC_HRS_ELIG): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:19
Entered hgopcda at 2012/10/18-14:56:19
Column:12(REG_DAY_SCH_PGM_DEAF): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:19
The hoada for table RSCCC.SR_SPEC_PGM_SPEC_ED follows...
hgodtab, line 904: Printing hoada @ 025B799C
MAX:12, ACTUAL:12, BRC:1, WHT=6 (TABLE_DESCRIBE)
hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
DTY NULL-OK LEN MAXBUFLEN PR/SC CST IND MOD NAME
12 VARCHAR N 4 4 0/ 0 0 0 200 SCH_YR
12 VARCHAR N 3 3 0/ 0 0 0 200 CAMPUS_ID
12 VARCHAR N 6 6 0/ 0 0 0 200 STU_ID
12 VARCHAR N 8 8 0/ 0 0 0 200 DT_ENTRY_STU
12 VARCHAR N 2 2 0/ 0 0 0 200 PRI_HANDI_IND
12 VARCHAR N 2 2 0/ 0 0 0 200 INSTRUCT_SET_CD
12 VARCHAR N 1 1 0/ 0 0 0 200 SPEECH_THRPY_IND
12 VARCHAR N 8 8 0/ 0 0 0 200 DT_WD
12 VARCHAR N 8 8 0/ 0 0 0 200 DT_ENTRY_STU_RECIP
12 VARCHAR N 2 2 0/ 0 0 0 200 WD_RSN_CD
12 VARCHAR N 1 1 0/ 0 0 0 200 VOC_HRS_ELIG
12 VARCHAR N 1 1 0/ 0 0 0 200 REG_DAY_SCH_PGM_DEAF
Exiting hgodtab, rc=0 at 2012/10/18-14:56:22
hostmstr: 2026291200: HOA After hoadtab
hostmstr: 2026291200: HOA Before hoadafr
Entered hgodafr, cursor id 0 at 2012/10/18-14:56:23
Free hoada @ 025B799C
Exiting hgodafr, rc=0 at 2012/10/18-14:56:23
hostmstr: 2026291200: HOA After hoadafr
hostmstr: 2026291200: RPC After Describe Table
hostmstr: 2026291200: RPC Before Describe Table
hostmstr: 2026291200: HOA Before hoadtab
Entered hgodtab at 2012/10/18-14:56:23
count:1
table: RSCCC.SR_DISCPLN
Allocate hoada[0] @ 025B799C
Entered hgopcda at 2012/10/18-14:56:27
Column:1(SCH_YR): dtype:12 (VARCHAR), prc/scl:4/0, nullbl:0, octet:4, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:28
Entered hgopcda at 2012/10/18-14:56:28
Column:2(STU_ID): dtype:12 (VARCHAR), prc/scl:6/0, nullbl:0, octet:6, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:28
Entered hgopcda at 2012/10/18-14:56:29
Column:3(OFENS_STAMP): dtype:12 (VARCHAR), prc/scl:27/0, nullbl:0, octet:27, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:29
Entered hgopcda at 2012/10/18-14:56:29
Column:4(OFENS_TIME): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:29
Entered hgopcda at 2012/10/18-14:56:30
Column:5(CAMPUS_ID): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:30
Entered hgopcda at 2012/10/18-14:56:30
Column:6(DT_OFENS): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:30
Entered hgopcda at 2012/10/18-14:56:31
Column:7(MODIFIER): dtype:12 (VARCHAR), prc/scl:10/0, nullbl:0, octet:10, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:31
Entered hgopcda at 2012/10/18-14:56:31
Column:8(OFENS_SEMCYC): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:32
Entered hgopcda at 2012/10/18-14:56:32
Column:9(REP_BY): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:32
Entered hgopcda at 2012/10/18-14:56:33
Column:10(REP_BY_NAME_F): dtype:12 (VARCHAR), prc/scl:17/0, nullbl:0, octet:17, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:33
Entered hgopcda at 2012/10/18-14:56:33
Column:11(REP_BY_NAME_L): dtype:12 (VARCHAR), prc/scl:25/0, nullbl:0, octet:25, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:33
Entered hgopcda at 2012/10/18-14:56:34
Column:12(INC_LOC): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:34
Entered hgopcda at 2012/10/18-14:56:35
Column:13(COURSE): dtype:12 (VARCHAR), prc/scl:4/0, nullbl:0, octet:4, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:35
Entered hgopcda at 2012/10/18-14:56:35
Column:14(SECTION): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:35
Entered hgopcda at 2012/10/18-14:56:36
Column:15(CRS_TITLE): dtype:12 (VARCHAR), prc/scl:15/0, nullbl:0, octet:15, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:36
Entered hgopcda at 2012/10/18-14:56:36
Column:16(PERIOD): dtype:12 (VARCHAR), prc/scl:2/0, nullbl:0, octet:2, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:36
Entered hgopcda at 2012/10/18-14:56:37
Column:17(INSTR): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:37
Entered hgopcda at 2012/10/18-14:56:37
Column:18(PARENT_CONTACT): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:37
Entered hgopcda at 2012/10/18-14:56:38
Column:19(CONTACT_DT): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:38
Entered hgopcda at 2012/10/18-14:56:38
Column:20(CONF_REQUESTED): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:38
Entered hgopcda at 2012/10/18-14:56:39
Column:21(CONF_DATE): dtype:12 (VARCHAR), prc/scl:8/0, nullbl:0, octet:8, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:39
Entered hgopcda at 2012/10/18-14:56:39
Column:22(INFORMAL_HEARING): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:39
Entered hgopcda at 2012/10/18-14:56:40
Column:23(APPEAL_EXP): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:40
Entered hgopcda at 2012/10/18-14:56:40
Column:24(WITNESS): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:41
Entered hgopcda at 2012/10/18-14:56:41
Column:25(DISCPLN_COMM): dtype:12 (VARCHAR), prc/scl:3270/0, nullbl:0, octet:3270, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:41
Entered hgopcda at 2012/10/18-14:56:42
Column:26(ADMIN_BY): dtype:12 (VARCHAR), prc/scl:3/0, nullbl:0, octet:3, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:42
Entered hgopcda at 2012/10/18-14:56:42
Column:27(ADMIN_BY_NAME_F): dtype:12 (VARCHAR), prc/scl:17/0, nullbl:0, octet:17, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:42
Entered hgopcda at 2012/10/18-14:56:43
Column:28(ADMIN_BY_NAME_L): dtype:12 (VARCHAR), prc/scl:25/0, nullbl:0, octet:25, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:43
Entered hgopcda at 2012/10/18-14:56:43
Column:29(REPORTED_BY_DESC): dtype:12 (VARCHAR), prc/scl:60/0, nullbl:0, octet:60, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:43
Entered hgopcda at 2012/10/18-14:56:44
Column:30(INCIDENT_NUM): dtype:12 (VARCHAR), prc/scl:6/0, nullbl:0, octet:6, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:44
Entered hgopcda at 2012/10/18-14:56:44
Column:31(REPORT_PD): dtype:12 (VARCHAR), prc/scl:1/0, nullbl:0, octet:1, sign:1, radix:0
Exiting hgopcda, rc=0 at 2012/10/18-14:56:45
The hoada for table RSCCC.SR_DISCPLN follows...
hgodtab, line 904: Printing hoada @ 025B799C
MAX:31, ACTUAL:31, BRC:1, WHT=6 (TABLE_DESCRIBE)
hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
DTY NULL-OK LEN MAXBUFLEN PR/SC CST IND MOD NAME
12 VARCHAR N 4 4 0/ 0 0 0 200 SCH_YR
12 VARCHAR N 6 6 0/ 0 0 0 200 STU_ID
12 VARCHAR N 27 27 0/ 0 0 0 200 OFENS_STAMP
12 VARCHAR N 8 8 0/ 0 0 0 200 OFENS_TIME
12 VARCHAR N 3 3 0/ 0 0 0 200 CAMPUS_ID
12 VARCHAR N 8 8 0/ 0 0 0 200 DT_OFENS
12 VARCHAR N 10 10 0/ 0 0 0 200 MODIFIER
12 VARCHAR N 2 2 0/ 0 0 0 200 OFENS_SEMCYC
12 VARCHAR N 3 3 0/ 0 0 0 200 REP_BY
12 VARCHAR N 17 17 0/ 0 0 0 200 REP_BY_NAME_F
12 VARCHAR N 25 25 0/ 0 0 0 200 REP_BY_NAME_L
12 VARCHAR N 3 3 0/ 0 0 0 200 INC_LOC
12 VARCHAR N 4 4 0/ 0 0 0 200 COURSE
12 VARCHAR N 2 2 0/ 0 0 0 200 SECTION
12 VARCHAR N 15 15 0/ 0 0 0 200 CRS_TITLE
12 VARCHAR N 2 2 0/ 0 0 0 200 PERIOD
12 VARCHAR N 3 3 0/ 0 0 0 200 INSTR
12 VARCHAR N 1 1 0/ 0 0 0 200 PARENT_CONTACT
12 VARCHAR N 8 8 0/ 0 0 0 200 CONTACT_DT
12 VARCHAR N 1 1 0/ 0 0 0 200 CONF_REQUESTED
12 VARCHAR N 8 8 0/ 0 0 0 200 CONF_DATE
12 VARCHAR N 1 1 0/ 0 0 0 200 INFORMAL_HEARING
12 VARCHAR N 1 1 0/ 0 0 0 200 APPEAL_EXP
12 VARCHAR N 1 1 0/ 0 0 0 200 WITNESS
12 VARCHAR N 3270 3270 0/ 0 0 0 200 DISCPLN_COMM
12 VARCHAR N 3 3 0/ 0 0 0 200 ADMIN_BY
12 VARCHAR N 17 17 0/ 0 0 0 200 ADMIN_BY_NAME_F
12 VARCHAR N 25 25 0/ 0 0 0 200 ADMIN_BY_NAME_L
12 VARCHAR N 60 60 0/ 0 0 0 200 REPORTED_BY_DESC
12 VARCHAR N 6 6 0/ 0 0 0 200 INCIDENT_NUM
12 VARCHAR N 1 1 0/ 0 0 0 200 REPORT_PD
Exiting hgodtab, rc=0 at 2012/10/18-14:56:50
hostmstr: 2026291200: HOA After hoadtab
hostmstr: 2026291200: HOA Before hoadafr
Entered hgodafr, cursor id 0 at 2012/10/18-14:56:50
Free hoada @ 025B799C
Exiting hgodafr, rc=0 at 2012/10/18-14:56:50
hostmstr: 2026291200: HOA After hoadafr
hostmstr: 2026291200: RPC After Describe Table
hostmstr: 2026291200: RPC Before Rollback Trans
hostmstr: 2026291200: HOA Before hoaroll
Entered hgoroll at 2012/10/18-14:56:51
tflag:1 , cmt(0):
hoi:0x12f098, ttid (len 27) is ...
00: 44415441 5748442E 65623465 33343931 [DATAWHD.eb4e3491]
10: 2E322E36 322E3839 363837 [.2.62.89687]
tbid (len 24) is ...
00: 44415441 5748445B 322E3632 2E383936 [DATAWHD[2.62.896]
10: 38375D5B 312E345D [87][1.4]]
Entered hgocpctx at 2012/10/18-14:56:52
Exiting hgocpctx, rc=0 at 2012/10/18-14:56:52
Exiting hgoroll, rc=0 at 2012/10/18-14:56:52
hostmstr: 2026291200: HOA After hoaroll
hostmstr: 2026291200: RPC After Rollback Trans
Please advise and thanks.. -
How to efficiently select random rows from a large table ?
Hello,
The following code will select 5 rows out of a random set of rows from the emp (employee) table
select *
from (
    select ename, job
    from emp
    order by dbms_random.value()
)
where rownum <= 5;
My concern is that the inner select will cause a full table scan in order to assign a random value to each row. This code, when used against a large table, can be a performance problem.
Is there an efficient way of selecting random rows from a table without having to do a table scan ? (I am new to Oracle, therefore it is possible that I am missing a very simple way to perform this task.)
thank you for your help,
John.
Edited by: 440bx on Jul 10, 2010 6:18 PM
Have a look at the SAMPLE clause of the SELECT statement. The number in parentheses is a percentage of the table.
SQL> create table t as select * from dba_objects;
Table created.
SQL> explain plan for select * from t sample (1);
Explained.
SQL> @xp
PLAN_TABLE_OUTPUT
Plan hash value: 2767392432
--------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   725 | 70325 |   289   (1)| 00:00:04 |
|   1 |  TABLE ACCESS SAMPLE| T    |   725 | 70325 |   289   (1)| 00:00:04 |
--------------------------------------------------------------------------
8 rows selected. -
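To get exactly 5 rows without ordering the whole table, SAMPLE can be combined with a ROWNUM filter; a sketch (the 1% sampling rate is an assumption to tune against the table size):

```sql
-- Visits roughly 1% of emp instead of scanning and sorting everything,
-- then keeps 5 of the sampled rows. Cheap, but not perfectly uniform:
-- the 5 rows come from whichever blocks the sample happened to touch.
SELECT ename, job
FROM emp SAMPLE (1)
WHERE ROWNUM <= 5;
```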
Retrieve data from a large table from ORACLE 10g
I am working with a Microsoft Visual Studio Project that requires to retrieve data from a large table from Oracle 10g database and export the data into the hard drive.
The problem here is that I am not able to connect to the database directly because of license issue but I can use a third party API to retrieve data from the database. This API has sufficient previllege/license permission on to the database to perform retrieval of data. So, I am not able to use DTS/SSIS or other tool to import data from the database directly connecting to it.
Here my approach is...first retrieve the data using the API into a .net DataTable and then dump the records from it into the hard drive in a specific format (might be in Excel file/ another SQL server database).
When I try to retrieve the data from a large table having over 13 lakh (1.3 million) records (3-4 GB) into a data table using the Visual Studio project, I get an Out of memory exception.
Is there any better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
Any help on this problem will be highly appreciated.
Thanks in advance...
-Jahedur Rahman
Edited by: Jahedur on May 16, 2010 11:42 PM
Girish... Thanks for your reply... But I am sorry for the confusion. Let me explain...
1."export the data into another media into the hard drive."
What does it mean by this line i.e. another media into hard drive???
ANS: Sorry...I just want to write the data in a file or in a table in SQL server database.
2."I am not able to connect to the database directly because of license issue"
huh?? I have never heard of a user not being able to connect to the db because of licensing. What error/message are you getting?
ANS: My company uses a 3rd party application that uses ORACLE 10g. My company is licensed to use the 3rd party application (app + database is a package) and did not purchase an ORACLE license to connect directly. So I cannot connect to the database directly.
3. I am not sure which API you are talking about, but I am running a Visual Studio application with a data grid or similar kind of control, in which I can select (select query) as many rows as I need; no issue.
ANS: This API is provided by the 3rd party application vendor. I can pass a query to it and it returns a datatable.
4. "better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?"
ANS: As I get a system error (out of memory) when I select all rows into a datatable at once, I wanted to retrieve the data in multiple phases.
E.g: 1 to 20,000 records in 1st phase
20,001 to 40,000 records in 2nd phase
40,001 to ...... records in 3rd phase
and so on...
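If the API accepts arbitrary SQL, each phase can be expressed as a row-number window; a sketch (the table name, the key column pk, and the 20,000-row page size are assumptions based on the post):

```sql
-- Phase 1 fetches rows 1..20000 ordered by a key column so pages are
-- deterministic between calls; phase 2 uses BETWEEN 20001 AND 40000, etc.
SELECT *
FROM (
    SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.pk) AS rn
    FROM source_table t
)
WHERE rn BETWEEN 1 AND 20000;
```

Note the pages are only mutually consistent if the table does not change between phases; on Oracle, a flashback query (AS OF SCN) could pin all phases to one snapshot.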
Please let me know if this does not clarify your confusions... :)
Thanks...
-Jahedur Rahman
Edited by: user13114507 on May 12, 2010 11:28 PM -
Java.lang.OutOfMemory error while retrieving data from a large table
Hi,
I am trying to fetch data using "executeQuery()" into a ResultSet from the database. But since the data in that table is large, I am receiving a "java.lang.OutOfMemory" error. To resolve that, I used "setMaxRows()" on my Statement object. This avoided the error, but I don't receive the entire data, and if I call "executeQuery()" again, I receive the same rows. I don't even know a filtering criterion by which I can filter the data for each "executeQuery()".
How can I resolve this problem?
Thanx in advance
--Chaitanya
Either use some criteria you develop related to one of the keys on the table, or use some sort of record-limiting method.
Note the method of limiting will vary related to the database you are using. You will have to look at the documentation.
For example I am told this will work in MySQL to get 200 records starting at record 100.
SELECT * FROM myTable ORDER BY whatever ASC LIMIT 100,200
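For Oracle (which has no LIMIT clause in this era), the equivalent page can be expressed with nested ROWNUM filters; a sketch reusing the hypothetical names from the MySQL example:

```sql
-- Rows 101..300, i.e. 200 records starting after record 100, Oracle style.
-- The inner query orders and caps at 300; the outer query trims the first 100.
SELECT *
FROM (
    SELECT x.*, ROWNUM rn
    FROM (SELECT * FROM myTable ORDER BY whatever ASC) x
    WHERE ROWNUM <= 300
)
WHERE rn > 100;
```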
Because you are running out of memory, I assume the table is large.
I am not sure what impact the above will have on performance, because if the ORDER BY is not based on an index, at the server level all the records will be selected and sorted before the records are limited.
I would make sure you have an appropriate index.
If you use the advanced search over the user forums using "resultset paging", and possibly the database you are using, you should be able to get some ideas.
I hope this makes sense to you.
rykk -
Forcing a user to only select from Parameter LOV list
Hi,
I suspect that the answer is no but I'd appreciate clarification on the matter. I am wondering if there is any way to prevent a user entering a value into a parameter field - I want them to only select from the parameter's drop down LOV list. This will apply to Discoverer Viewer but would like to know if it could be done for Plus ( or not ) as well,
Kevin.
Hi Kevin
Try changing the item class property called "Require user to always search for values".
According to my notes: this is unchecked by default. If you check it, Discoverer will launch the Search dialog box whenever a user clicks on the list of values. Should you have a large list of values, you may want to consider turning on this option rather than having the full LOV list pop up automatically.
I am not convinced this will make the pop-up come up, but it's worth a try.
I'd be interested in hearing how you get on.
Best wishes
Michael -
Select from refcursor in PL/SQL pkg
Greetings,
I would like to be able to return a set of values in some sort of temporary structure (temp table, cursor... it doesn't matter) to be used as part of the WHERE clause in several other queries. Normally I would just use a subquery in each of the other queries, like so:
SELECT a.1, a.2, a.3
FROM bigtbl a,
(SELECT x, y, z FROM criteriakeys)b
WHERE a.x = b.x AND
a.y = b.y AND
a.z = b.z
SELECT c.1, c.2, c.3
FROM bigrtbl c,
(SELECT x, y, z FROM criteriakeys)b
WHERE c.x = b.x AND
c.y = b.y AND
c.z = b.z
etc...
The problem is that the query (b in the above) is a complex one that hits a very large number of records. I understand that temp tables are not supposed to be needed in Oracle - I haven't used one in 10 years and until now did not even know it was possible. So I would use a refcursor, BUT I don't know how to use a refcursor as the base of a SELECT, something like:
rc ref cursor;
getcriteriakeys(rc); -- send in a refcursor and come back full of records
SELECT a.1, a.2, a.3
FROM bigtbl a,
(SELECT x, y, z FROM rc)b
WHERE a.x = b.x AND
a.y = b.y AND
a.z = b.z
This seems, to me anyway, to be an obvious use of refcursor but I cannot find any mention of it either way. I can infer from the way refcursors are used that they only send back a row at a time. This is nice but would produce sub-optimal results.
Any suggestions or comments will be welcome.
Thanks
If query b is really complex, takes a "while" to execute (long enough that you don't want to run it more than once in a set of statements), and is not as straightforward as in your example, one option would be to load the results of query b into a global temporary table - and then join to that table in your successive queries.
A ref cursor is not really applicable in this context.
The routine would be something like:
insert into gtt
select * from ...; -- complex query b
-- the selects below would either be cursor for loops or single row select/into
-- (couldn't tell from your sample)
select *
from a, gtt
where a.x = gtt.x
and ...;
select *
from b, gtt
where b.x = gtt.x
and ...; -
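For what it's worth, the materialise-once, join-many pattern the reply describes can be sketched in Python with a SQLite TEMP table standing in for an Oracle global temporary table (all table and column names are invented for illustration):

```python
import sqlite3

# Hypothetical sketch: run the expensive query once into a temp table,
# then join against the cheap materialised result repeatedly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bigtbl (x INTEGER, y INTEGER, z INTEGER, val TEXT);
    CREATE TABLE criteriakeys (x INTEGER, y INTEGER, z INTEGER);
    INSERT INTO bigtbl VALUES (1, 1, 1, 'keep'), (2, 2, 2, 'skip');
    INSERT INTO criteriakeys VALUES (1, 1, 1);
""")

# Materialise the "complex query b" result once.
conn.execute("CREATE TEMP TABLE gtt AS SELECT x, y, z FROM criteriakeys")

# Each subsequent query joins the materialised table instead of
# re-running the complex query.
rows = conn.execute("""
    SELECT a.val FROM bigtbl a
    JOIN gtt b ON a.x = b.x AND a.y = b.y AND a.z = b.z
""").fetchall()
print(rows)  # [('keep',)]
```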
Help selecting from records with duplicate fields
Test_table2 is shown below with the SQL to create it.
I need to:
Identify duplicate address_keys
Within that set of duplicate address_keys, select only if HH_Income is the same between the two records and the HH_Age difference is less than 10
Now the duplicate set matches the necessary criteria, and I want to select from these duplicate address_keys, only the one with the most recent verification date
The purpose of this is to infer cohabiting couples. In the large set of data we receive, for each HH_key if the people have the same last name they are all listed under the same HH_Key, but if they do not have same last name but live together, they will be listed as separate households (HH_key) but with the same address_key. A further validator is if each HH_key reports the same HH_Income and if they are close in age. We then only want to mail to one of the people, so we choose the one that has the most recent verification date.
The result I would expect here, using the table I provided, would be:

HH_Key  Address_Key  HH_Type  HH_Income  Age  Verification_Date
1234    1111         10       6          50   10-Jun-13
Can you help?
HH_Key  Address_Key  HH_Type  HH_Income  Age  Verification_Date
1234    1111         10       6          50   10-Jun-13
5678    1111         11       6          49   15-Jun-12
5544    2222         10       6          65   10-Apr-13
7788    1111         3        3          25   10-Jun-13
9898    3333         10       6          45   18-Jun-13
CREATE TABLE test_table2
(HH_key varchar(20),
address_key varchar(20),
HH_type varchar(2),
HH_Income varchar(2),
HH_age varchar(2),
Verification_date Date);
INSERT INTO test_table2
(HH_Key, Address_key, HH_Type, HH_Income, HH_Age, Verification_date)
VALUES
(1234, 1111, 10, 6, 50, '10-Jun-13');
INSERT INTO test_table2
(HH_Key, Address_key,HH_Type, HH_Income, HH_Age, Verification_date)
VALUES
(5678, 1111, 11, 6, 49, '15-Jun-12');
INSERT INTO test_table2
(HH_Key, Address_key,HH_Type, HH_Income, HH_Age, Verification_date)
VALUES
(5544, 2222, 10, 6, 65, '10-Apr-13');
INSERT INTO test_table2
(HH_Key, Address_key,HH_Type, HH_Income, HH_Age, Verification_date)
VALUES
(7788, 1111, 3, 3, 25, '10-Jun-13');
INSERT INTO test_table2
(HH_Key, Address_key,HH_Type, HH_Income, HH_Age, Verification_date)
VALUES
(9898, 3333, 10, 6, 45, '18-Jun-13');

I really like the results this gave, because it allows me to create a view that contains the pair in one record. Thanks for this response. I think I did post the result I was looking for, which would be the one record that we would mail to. Once I create the view with one record for each pair, I would just need to select the greatest verification date with the related data. Would that be the best approach to take? The fact that I now have a single record for each pair is excellent!
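As a sketch (not the exact answer given in the thread), the full selection - duplicate address_key, equal HH_Income, age difference under 10, keep the later verification date - can be written as a self-join. Here it is in Python/SQLite, with dates stored in ISO format so plain string comparison orders them correctly, and the varchar columns simplified to INTEGER:

```python
import sqlite3

# Hedged sketch of the cohabiting-couple selection using the sample
# data from this thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_table2 (
        hh_key INTEGER, address_key INTEGER, hh_type INTEGER,
        hh_income INTEGER, hh_age INTEGER, verification_date TEXT);
    INSERT INTO test_table2 VALUES
        (1234, 1111, 10, 6, 50, '2013-06-10'),
        (5678, 1111, 11, 6, 49, '2012-06-15'),
        (5544, 2222, 10, 6, 65, '2013-04-10'),
        (7788, 1111,  3, 3, 25, '2013-06-10'),
        (9898, 3333, 10, 6, 45, '2013-06-18');
""")

# Self-join on address_key: same income, age difference under 10,
# then keep only the member of each pair with the later verification
# date. (A tie in dates would need an extra tie-break, e.g. on hh_key.)
rows = conn.execute("""
    SELECT DISTINCT a.hh_key, a.address_key, a.verification_date
    FROM test_table2 a
    JOIN test_table2 b
      ON a.address_key = b.address_key
     AND a.hh_key <> b.hh_key
     AND a.hh_income = b.hh_income
     AND ABS(a.hh_age - b.hh_age) < 10
    WHERE a.verification_date > b.verification_date
""").fetchall()
print(rows)  # [(1234, 1111, '2013-06-10')]
```

Only HH_Key 1234 survives: it pairs with 5678 (same address and income, ages 50/49) and has the more recent verification date.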
The result I would expect here, using the table I provided, would be:

HH_Key  Address_Key  HH_Type  HH_Income  Age  Verification_Date
1234    1111         10       6          50   10-Jun-13 -
I need to create and copy data from a remote Oracle server to a local server. The command I use is
create table X as ( select * from X@remote_server )
with remote_server is the tns name of the remote Oracle server.
The local table is created and populated with data as expected, but when I check the structure using 'desc X' it shows all the CHAR fields of the local table are three times as large as those of the remote table.
I guess the problem is the difference between the NLS_CHARACTERSET settings. The local character set is AL32UTF8 and the remote is WE8MSWIN1252.
How do I change the command to make the two tables have the same field sizes ?
Thanks,
Vu
Do you want to be able to store all the data from the remote table in the local table? Assuming you do, increasing the size of the columns is the correct behavior.
By default, a VARCHAR2(10) allocates up to 10 bytes of storage. In the Windows-1252 character set on the source, 1 character requires 1 byte of storage, so a VARCHAR2(10) has space for up to 10 characters. In the UTF-8 character set on the destination, however, 1 character can require up to 3 bytes of storage, so a VARCHAR2(10) may allow you to store as few as 3 characters. Since Oracle has no way of knowing what data you have (and will have) in the source system, it triples the size to ensure that all the data from the remote system will fit in the new table. If you make the columns the same size on the destination as they are on the source, it is highly probable that you'll get errors inserting the data, because at least one value will be too large for the destination column.
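The byte expansion can be illustrated outside the database. This Python sketch counts characters versus encoded bytes in the two character sets:

```python
# Why AL32UTF8 needs more bytes per character than Windows-1252:
# a character that takes 1 byte in Windows-1252 can take up to
# 3 bytes in UTF-8, so byte-sized column lengths are tripled.
samples = ["abcdefghij", "é" * 10, "€" * 10]  # 10 characters each
for s in samples:
    print(len(s), len(s.encode("cp1252")), len(s.encode("utf-8")))
# All three are 10 characters and 10 bytes in Windows-1252, but
# 10, 20 and 30 bytes respectively in UTF-8.
```

If you need the declared sizes to match, one common approach is to use character length semantics on the destination (e.g. declaring columns as VARCHAR2(10 CHAR), or setting NLS_LENGTH_SEMANTICS), so the limit is counted in characters rather than bytes.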
Justin -
Hi
The below piece of code is throwing a dump in the Production system.
Is there an alternate way to do this selection?
IF NOT it_pos[] IS INITIAL.
SELECT * FROM bseg INTO
TABLE it_bseg
FOR ALL ENTRIES IN it_pos
WHERE bukrs IN dd_bukrs
AND belnr = it_pos-belnr
AND gjahr = it_pos-gjahr.
ENDIF.
Regards
Subin.S
I think you need to post a few more details of the environment and the error to assist people in offering solutions.
Questions I think of are:
- What type of dump are you getting? - Out of Time? Out of Memory? Other?
- How many entries are in internal table it_pos[] before the BSEG select starts?
- How many entries are in BSEG approximately?
- Do you need all the BSEG records to be selected into an internal table at once or can you process them in smaller sets?
- Do you have a number of other large internal tables in your program? Can any of these be cleared to free up memory?
- can you run SQL trace (ST05) against the program to see the execution time for each fetch from BSEG and how many records each fetch returns?
- why are you doing two selects from BSEG? What is the difference between them? normally better to get all the fields at the same time instead of selecting twice.
- in Development or Test where there is less data, does the program run OK? If so, can you run it in SE30 to see what that transaction highlights as performance or similar issues?
- what table or tables do you fill table it_pos[] from? are there any duplicate records in this internal table?
- what SAP version are you running? 32 bit or 64 bit? What database?
To solve an issue like this with a program these and probably dozens of other questions must be asked and answered - and as the person on the site you are the only one able to get the answers.
Posting more details will help forum readers to evaluate the issue in light of their experience and to provide further suggestions. The more information you can give - the more likely that someone will be able to answer.
thanks
Andrew -
System getting hanged whilst using Insert into table select * from table
I have a peculiar problem.
I am using the below statements:
Query 1:
insert into ppms.erin_out@ppms_dblink select * from erin_out;
Query 2:
insert into ppms.erin_out@ppms_dblink values (23, 'dffgg', '12', 'dfdfdgg', 'dfdfdg');
I am in the 'interfaces' schema (testing server) and executing the above statements. We have a testing server and a development server; they are identical, i.e. one is a clone of the other.
ppms_dblink is created in the interfaces schema. ppms_dblink points to a different database server which has two schemas, 'clarity' and 'ppms'. ppms_dblink is created with the authentication details of the clarity schema.
erin_out is a table created in the ppms schema on the same database server pointed to by ppms_dblink.
Question is :
TOAD hangs while running query 1.
Query 2 is working perfectly.
As I have a PL/SQL script which uses query 1, I want to know why query 1 is creating a problem.
If I use query 2 in my PL/SQL script it may create a performance issue, as I would then have to use a cursor.
On clarity schema, I have insert, update, select, modify rights on ppms.erin_out.
I have tried same queries from another database server.
That is I tried queries from 'interfaces' schema of development server ( clone of the testing server ). Its working perfectly.
Message was edited by:
user484158
Dhanchik:
The table from which I select rows to insert into the table over the dblink has only one record. It may contain at most 100 rows at a time, because I am scheduling the procedure through a daemon process. In any case the transaction is never more than 100 records. I am testing with just 1 record.
So 1) the problem is not about cost - TOAD is hanging (to insert 1 record, cost does not mean much)
2) there is no large amount of data, so no question of deteriorated performance
Aron Tunzi:
I think that should not be problem, because I am able to insert a record through query 2.
Warren Tolentino :
I am testing with 1 record only. It's not a performance issue.
Message was edited by:
रचित -
Is it possible to delete data selectively from Business content cubes
Dear Experts,
Requesting you to help me out to know, is it possible to delete data selectively from Business content cubes.
When I'm trying to delete selectively from Business content cubes, the background job gets cancelled with ST22 logs stating
A RAISE statement in the program "SAPLRSDRD" raised the exception condition "X_MESSAGE".
Since the exception was not intercepted by a superior program, processing was terminated.
and i tried with few more Technical content cubes but the same thing happens.
Pls let me know how to selectively delete data from Business content cubes if it's possible?.
Thanks in advance for your favorable assistance.
Regards,
Ramesh-Kumar.
Hi Ramesh,
Follow below steps for selective deletion:
1. Transaction code: Use the Transaction code DELETE_FACTS.
2. Generate selective deletion program:
A report program will be generated with the given name, here .
3. Selection screen:
Take the deletion program "ZDEL_EPBG" to transaction code SE38 to see/execute the program.
After executing, it will take you to a selection screen.
As we need to carry out deletion selectively on Calendar week, we need to get the screen field name for the Calendar week field. For this, click on the Calendar week field and press F1.
Then click on the technical information button to find the screen field name.
ABAP program to carry out the Calendar week calculation
Problem scenario: As stated earlier the requirement is to delete the data from the cube based on the calendar week. Thus a code must be developed such that the number of weeks should be taken as input and corresponding calendar week should be determined. This calendar week should be then passed to the deletion program in order to carry out the data deletion from the InfoCube.
Transaction code: Use T-code SE38 in order to create a program.
Logic: Suppose we need to delete the data older than 100 weeks.
a. Get the number of weeks and the system date into variables and calculate the total number of days:
lv_week = 100.               " number of weeks
lv_dte = sy-datum.           " system date
v_totaldays = lv_week * 7.   " total days
b. Get the corresponding calendar day from the total days. This is obtained by simply subtracting the total number of days from the system date.
lv_calday = lv_dte - v_totaldays.   " corresponding calendar day
c. Now in order to get the calendar week corresponding to the calculated calendar day we must call a function module 'DATE_TO_PERIOD_CONVERT'. This function module takes input as Calendar day and Fiscal year variant and returns the appropriate fiscal period.
Get the sales week time elements
call function 'DATE_TO_PERIOD_CONVERT'
exporting
i_date = lv_calday
i_periv = lc_sales
importing
e_buper = lv_period
e_gjahr = lv_year
exceptions
input_false = 1
t009_notfound = 2
t009b_notfound = 3.
if sy-subrc = 0.
ls_time-calweek(4) = lv_year.
ls_time-calweek+4(2) = lv_period.
endif.
v_week = ls_time-calweek.
Note: We can pass the fiscal year variant, which can be obtained from table T009B. For example, here the fiscal year variant lc_sales = 'Z2'. LS_TIME can be any structure with suitable time fields.
d. Now we have obtained the required calendar week in the v_week variable. This calendar week is the week up to which we need to keep the data; anything older will be deleted by the deletion program.
Submitting the Data deletion program for ZEPBGC01 and key field
SUBMIT ZDEL_EPBG WITH C039 LT v_week.
Here the calendar week value is submitted to the deletion program ZDEL_EPBG with the screen field of calendar week.
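The week arithmetic in steps a-c can be mirrored in Python. Note this sketch uses plain ISO weeks in place of the SAP fiscal year variant (the ABAP version calls DATE_TO_PERIOD_CONVERT with variant Z2), so it is an approximation for illustration:

```python
from datetime import date, timedelta

def calweek_weeks_ago(n_weeks, today):
    # a/b. total days back, subtracted from the reference date
    target = today - timedelta(days=n_weeks * 7)
    # c. ISO year/week standing in for DATE_TO_PERIOD_CONVERT
    iso_year, iso_week, _ = target.isocalendar()
    return f"{iso_year}{iso_week:02d}"   # YYYYWW, like ls_time-calweek

# e.g. the calendar week 100 weeks before 18-Jun-2013
print(calweek_weeks_ago(100, date(2013, 6, 18)))
```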
Hope ... this will help you..
Thanks,
Jitendra -
Abap Gurus,
I am fetching belnr, dmbtr, buzei and hkont from BSEG
FOR ALL ENTRIES IN one internal table, say 'TAB',
WHERE hkont = TAB-hkont
AND koart = 'S'.
But the code inspector check shows an error.
The message is
"Large table BSEG: No field of a table index in WHERE"
How do I optimize a fetch from the large cluster table BSEG?
rewards if useful.
Thanks in advance
Hi,
Alternatives to Reading BSEG (Accounting Document Segment).
Since performance is an issue if reading data from BSEG table ( being a cluster table ), maybe you would
consider using the tables:
BSAD Accounting : Secondary Index for Customers (Cleared Items)
BSAK Accounting : Secondary Index for Vendors (Cleared Items)
BSAS Accounting : Secondary Index for G/L Accounts (Cleared Items)
BSID Accounting : Secondary Index for Customers
BSIK Accounting : Secondary Index for Vendors
BSIS Accounting : Secondary Index for G/L Accounts
instead of BSEG.
It depends on what your program has to select (if you're only looking for customers
you can use BSID and BSAD etc.)
These are normal database tables, not clusters. Normally every record from BSEG
can be found back in one of these 6 tables and a program which selects data from
these tables runs faster than from BSEG.
Reward points if helpful
Thanks
Shambhu
Maybe you are looking for
-
How do I change a printer's IP address, !again! :(
O Great, Knowledgeable and Kind printer gurus, I'm in a similar situation to Peter Minter's old archived thread at http://discussions.apple.com/thread.jspa?messageID=2188292? ... So I'm sorry to be the next confused person Sorry to be asking ask a q
-
Please remove remaining 255 character API limitations
I develop a library for creating .xll add-ins using the Excel C API (as described in the Excel SDK: http://msdn.microsoft.com/en-us/library/office/bb687883(v=office.15).aspx ). However, this discussion also applies to the COM Automation interfaces, a
-
I recently migrated from MacBook Pro to MacBook Pro retina. Now my macbook will not print to my LaserJet3200 printer. I downloaded the drivers but still only prints a line of gibberish per page, and endless pages, when i send printer a print job. Any
-
When opening a new tab how do I set the cursor to appear in the search bar?
New tab search
-
Posting PO through idoc PORDCR101
HI Through this Idoc PORDCR101 trying to create PO in the system. Need to change the calculated price getting fetched in PB00 condition type with the legacy value while creating PO. trying to upload the legacy PO with their old price ,but the po