Query on a table with indexed date field
I have a table with a date column which is indexed. If I run a query like "select column1 where date_field = '20-JAN-04'", for example, it is fast and uses the index.
If I run "select column1 where date_field < '20-JAN-04'" it is slow and doesn't use the index. I logged a TAR and Oracle told me that this is to be expected, as not using the index in this case is the most efficient way of doing the query.
Now my concept of an index is like the index of the Yellow Pages (telephone directory), for example. In that example, if I look for a name that is, say, "Halfords" or below, I can see all entries from Halfords all the way to ZZZ in one block.
I just can't see, in a common-sense way, why Oracle won't use the index in this type of query.
George
Using the concept of a telephone directory is wrong here. In a telephone directory, all the information is ordered by name. In your table, however (unless it is an IOT), the rows are not stored in order of your date_field. Think instead of the document "Oracle9i Database Concepts" and its index.
Let's say you want to find all indexed words greater than "ISO SQL standard" (OK, that doesn't make much sense, but it is just an example). Would it be faster to read the whole document, or to look up each word in the index and then read the entire page (Oracle block) to find the word?
It's not always easy to know in advance whether the query will be faster over the index or via a full table scan. What you need to do is analyze (dbms_stats) the table and its index; in most cases Oracle then chooses the right way. You can also use the hint /*+ index(table_name index_name) */ and see whether it is faster over the index or not.
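To make that concrete, here is a hedged sketch of the analyze-then-compare approach (the schema, table, and index names are placeholders, not from the original post): gather statistics first, then compare the optimizer's default plan against the hinted one.

```sql
-- Gather statistics so the optimizer can estimate selectivity
-- (schema/table/index names below are hypothetical)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',
    tabname => 'MY_TABLE',
    cascade => TRUE);  -- also gathers index statistics
END;
/

-- Let the optimizer choose; for a wide range this will often be a full scan
SELECT column1 FROM my_table WHERE date_field < DATE '2004-01-20';

-- Force the index and compare the plan and elapsed time
SELECT /*+ index(my_table my_table_date_idx) */ column1
FROM my_table
WHERE date_field < DATE '2004-01-20';
```

Whichever version runs faster on your data answers the question for that predicate; the crossover point depends on how many rows fall below the date.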
A good document about that subject is:
http://www.ioug.org/tech/IOUGinDefense.pdf
HTH
Maurice
Similar Messages
-
SOA 11g DBAdapter Polling using a Sequencing Table with a DATE Field
Hi.
I have implemented a polling solution using a sequencing table that references a DATE column. For the most part the poll works correctly, but on occasion I am not seeing a BPEL process instantiation for various records that get created in my source table. I have also noticed that in this particular case, the DATE field content on one or more source records is exactly the same (down to "YYYYMMDDhhmmss").
I just want to confirm that the polling adapter should be able to pick up multiple records from the source table even when the date field is exactly the same. Also, how should I go about debugging this issue if point #1 is handled by the system? Is there a specific log or trace file that I can look at? Could I be facing a timing issue?
Can someone please comment on this.
Thanks.
In my table, I don't see a primary key, only a unique index. I would assume, then, that this is analogous to the primary-key capture; I noticed that the code was currently only using one of the two columns from the unique index.
Also, is the date-content issue I mentioned above a moot point, given that the correct primary key/unique key data pointers are not established?
Thanks. -
R4 EA: Error loading a table with an XMLTYPE field
When I try to view the data in a table with an XMLTYPE field I get the following error in R4 EA Version 4.0.0.12 Build MAIN-12-27. This works in 2.2 and 3.0.
oracle.sqldeveloper.migration.application
Error: Resource not found: ${SCRATCH_COMMAND_ICON}.
Double clicking on the error opens the EXTENSION.XML file and shows this line:
<trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
<!-- Add registry here if required -->
<triggers xmlns:c="http://xmlns.oracle.com/ide/customization">
<actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
<action id="MigrationProject.ApplicationScan">
<properties>
<property name="Name">${APPSCAN_TITLE}</property>
<property name="MnemonicKey">${APPSCAN_TITLE2}</property>
<property name="SmallIcon">res:${SCRATCH_COMMAND_ICON}</property>
</properties>
</action>
</actions>
The table has the following definition:
ID NUMBER(38,0) No
WS_DATA XMLTYPE Yes
WS_SNAPSHOT_ID NUMBER(38,0) No
Any help would be greatly appreciated. We have to upgrade to 4.0 because our security team will no longer allow Java 6 on any server or workstation.
Thanks,
Steve
Hi Steve,
Still no response?
I am having the same issue when querying a table with XMLTYPE.
How is your XMLTYPE stored in the DB, as a CLOB or as a BINARY XML?
Regards,
Shaun -
Weird errors with a DATE field in a certain row
We have a table with an exp_date field of DATE datatype. One row is giving us mysterious problems.
If I query the row in sqlplus, the exp_date field displays 18-Jun-04. The same query in Oracle SQL Developer displays 19-Jun-04. The same query in TOAD returns a NULL field, and adding "exp_date IS NULL" to the query returns no rows (since it really isn't NULL!).
In sqlplus, if I use trunc() or round() on exp_date, it gets bumped up to 19-Jun-04.
This field for this record is causing some issues in our java classes as well, we're adding some more logging/exception handling to find out exactly what they are.
Any thoughts?
FWIW this is on Oracle 10gR2 (10.2.0.2).
Message was edited by:
dtseiler
What happens if you run the following in all three different environments:
SQL> select to_char(exp_date, 'dd-mon-yyyy hh24:mi:ss') from your_table where your_condition ;
-
How to select data from a table using a date field in the where condition?
How to select data from a table using a date field in the where condition?
For eg:
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and bdatu = '31129999'.
Thanks.
Hi Ramesh,
Specify the date format as YYYYMMDD in where condition.
Dates are internally stored in SAP as YYYYMMDD only.
Change your date format in WHERE condition as follows.
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and bdatu = '99991231'.
I doubt it; check your database table EQUK for the existence of data with this date.
Otherwise, just change the condition on BDATU as below to see all entries prior to this date.
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and bdatu <= '99991231'.
Thanks,
Vinay
Thanks,
Vinay -
Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field (making 29 LONG fields), I can no longer insert a dataset.
Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
Thanks in advance
Michael
appendix:
- Create and Insert command and error message
- MaxDB version and its parameters
Create and Insert command and error message
CREATE TABLE "DBA"."AZ_Z_TEST02" (
"ZTB_ID" Integer NOT NULL,
"ZTB_NAMEOFREPORT" Char (400) ASCII DEFAULT '',
"ZTB_LONG_COMMENT" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_00" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_01" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_02" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_03" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_04" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_05" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_06" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_07" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_08" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_09" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_10" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_11" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_12" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_13" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_14" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_15" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_16" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_17" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_18" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_19" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_20" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_21" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_22" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_23" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_24" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_25" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_26" LONG ASCII DEFAULT '',
PRIMARY KEY ("ZTB_ID"))
The insert command
INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
works fine. If I add the LONG field
"ZTB_LONG_TEXTBLOCK_27" LONG ASCII DEFAULT '',
the following error occurs:
Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
MaxDB version and its parameters
All db params given by
dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
are
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
SERVERDBFOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 10
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 64000
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 64000
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 22
MULTIO_BLOCK_CNT 4
DELAYLOGWRITER 0
LOG_IO_QUEUE 50
RESTARTTIME 600
MAXCPU 1
MAXUSERTASKS 50
TRANSRGNS 8
TABRGNS 8
OMSREGIONS 0
OMSRGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
ROWRGNS 8
MINSERVER_DESC 16
MAXSERVERTASKS 20
_MAXTRANS 288
MAXLOCKS 2880
LOCKSUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
USEASYNC_IO YES
IOPROCSPER_DEV 1
IOPROCSFOR_PRIO 1
USEIOPROCS_ONLY NO
IOPROCSSWITCH 2
LRU_FOR_SCAN NO
PAGESIZE 8192
PACKETSIZE 36864
MINREPLYSIZE 4096
MBLOCKDATA_SIZE 32768
MBLOCKQUAL_SIZE 16384
MBLOCKSTACK_SIZE 16384
MBLOCKSTRAT_SIZE 8192
WORKSTACKSIZE 16384
WORKDATASIZE 8192
CATCACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 1632
INIT_ALLOCATORSIZE 229376
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
TASKCLUSTER01 tw;al;ut;2000sv,100bup;10ev,10gc;
TASKCLUSTER02 ti,100dw;30000us;
TASKCLUSTER03 compress
MPRGN_QUEUE YES
MPRGN_DIRTY_READ NO
MPRGN_BUSY_WAIT NO
MPDISP_LOOPS 1
MPDISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
MPRGN_PRIO NO
MAXRGN_REQUEST 300
PRIOBASE_U2U 100
PRIOBASE_IOC 80
PRIOBASE_RAV 80
PRIOBASE_REX 40
PRIOBASE_COM 10
PRIOFACTOR 80
DELAYCOMMIT NO
SVP1_CONV_FLUSH NO
MAXGARBAGECOLL 0
MAXTASKSTACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
DWIO_AREA_SIZE 50
DWIO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
FBMLOW_IO_RATE 10
CACHE_SIZE 10000
DWLRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
DATACACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
IDXFILELIST_SIZE 2048
SERVERDESC_CACHE 73
SERVERCMD_CACHE 21
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
READAHEADBLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_C5
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 648
EXTERNAL_DUMP_REQUEST NO
AKDUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
UTILITYPROTFILE dbm.utl
UTILITY_PROTSIZE 100
BACKUPHISTFILE dbm.knl
BACKUPMED_DEF dbm.mdf
MAXMESSAGE_FILES 0
EVENTALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3607
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-13 13:47:17
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
DIAGSEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 21333
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION IMPROVED
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO
Lars Breddemann wrote:
> Hi Michael,
>
> this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
> Really.
>
> Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
> Anyhow, when I use
>
> insert into "AZ_Z_TEST02" values (87,'','','','','','','','','','','','','','','',''
> ,'','','','','','','','','','','','','','','','')
>
> it works fine.
It solves my problem, thanks a lot. I can hardly believe that this is all that was needed to work around the bug; that may be why I never gave it a try.
>
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
>
> Now to the other errors:
> - 28 Long values per row?
> What the heck is wrong with the data design here?
> Honestly, you can save data up to 2 GB in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
>
> - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing that you get 10000 of reports with the same name...
You are right, this table looks a bit strange. The story behind it: each Crystal Report in the application has a few text blocks which are the same for all of the, e.g., persons the letter is created for. In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. So I put the texts of the text blocks into this "strange" DB table (one row per report, one field per text block, the name of the report given by "ztb_nameofreport"), and the application offers a menu through which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the SQL select command of the Crystal Report considerably if they were integrated into it. So it is done another way: the texts are read before the Crystal Report is loaded, then the texts are "given" to the Crystal Report via its parameters, and finally the Crystal Report is loaded.)
>
> - MaxDB 7.5 Build 26?? Where have you been the last few years?
> Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
> With 7.6. I was not able to reproduce your issue at all.
The customer still has Win98 clients. MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
>
> - Are you really putting your data into the DBA schema? Don't do that, ever.
> DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
> Create a user/schema for your application data and put your tables into that.
>
> KR Lars
In the first MaxDB version I used, schemas were not available. I haven't changed it afterwards. Is there an easy way to "move an existing table into a new schema"?
Michael -
Large insert op into table with indexes
Hi,
Oracle 8.1.7.0. Empty table (after truncate) with two indexes. We need to insert about 40 billion records. Which is the better way to complete this task:
1. Drop indexes, insert data then build indexes.
2. Simply insert data into table.
Thanks.
The only way to find out is to test... For example, I did a test on my single-CPU box with Oracle 9i. My test was to load all the rows from DBA_SOURCE (only 650k rows). I found that a single insert statement with the bitmap indexes online ran faster than the total elapsed time of taking the indexes offline, inserting, and bringing the indexes back up...
With 40 billion rows, I presume you're using partitioned tables and enabling parallel DML. Thus, your test will be much different from mine...
In past ETL projects I worked on, I found little difference in timing. I decided that I didn't want to drop indexes (it was 8i), so I loaded the empty tables with indexes (and constraints) enabled...
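For reference, option 1 from the question is often tested by marking the indexes unusable rather than dropping them. A hedged sketch (all object names and the degree of parallelism are placeholders; whether APPEND and parallel DML are appropriate depends on your partitioning and hardware):

```sql
-- Take the indexes "offline" without losing their definitions
ALTER INDEX big_tab_idx1 UNUSABLE;
ALTER INDEX big_tab_idx2 UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- Direct-path, parallel insert from the staging source
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(big_tab, 8) */ INTO big_tab
SELECT * FROM staging_tab;
COMMIT;

-- Rebuild the indexes afterwards
ALTER INDEX big_tab_idx1 REBUILD PARALLEL 8 NOLOGGING;
ALTER INDEX big_tab_idx2 REBUILD PARALLEL 8 NOLOGGING;
```

Time this end to end (including the rebuilds) against a plain insert with the indexes online; as Stan notes, only a test on your own system settles which is faster.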
Stan -
How to fill internal table with selection screen field.
Hi all,
I am new to SAP. Please tell me how to fill an internal table from a selection-screen field.
Hi,
Please see the example below:-
I have used both select-options and parameter on the selection-screen.
Understand the same.
* type declaration
TYPES: BEGIN OF t_matnr,
matnr TYPE matnr,
END OF t_matnr,
BEGIN OF t_vbeln,
vbeln TYPE vbeln,
END OF t_vbeln.
* internal table declaration
DATA : it_mara TYPE STANDARD TABLE OF t_matnr,
it_vbeln TYPE STANDARD TABLE OF t_vbeln.
* workarea declaration
DATA : wa_mara TYPE t_matnr,
wa_vbeln TYPE t_vbeln.
* selection-screen field
SELECTION-SCREEN: BEGIN OF BLOCK b1.
PARAMETERS : p_matnr TYPE matnr.
SELECT-OPTIONS : s_vbeln FOR wa_vbeln-vbeln.
SELECTION-SCREEN: END OF BLOCK b1.
START-OF-SELECTION.
* I am adding parameter value to my internal table
wa_mara-matnr = p_matnr.
APPEND wa_mara TO it_mara.
* I am adding select-options value to an internal table
LOOP AT s_vbeln.
wa_vbeln-vbeln = s_vbeln-low.
APPEND wa_vbeln TO it_vbeln.
ENDLOOP.
Regards,
Ankur Parab -
How to compare two rows from two tables with different data
How do I compare two rows from two tables with different data?
e.g.
Table 1
ID DESC
1 aaa
2 bbb
3 ccc
Table 2
ID DESC
1 aaa
2 xxx
3 ccc
Result
2
Create table tab1(ID int, DE char(10))
Create table tab2(ID int, DE char(10))
Insert into tab1 Values (1,'aaa')
Insert into tab1 Values (2,'bbb')
Insert into tab1 Values (3,'ccc')
Insert into tab1 Values (4,'dfe')
Insert into tab2 Values (1,'aaa')
Insert into tab2 Values (2,'xx')
Insert into tab2 Values (3,'ccc')
Insert into tab2 Values (6,'wdr')
SELECT tab1.ID, tab2.ID As T2 from tab1
FULL join tab2 on tab1.ID = tab2.ID
WHERE BINARY_CHECKSUM(tab1.ID, tab1.DE) <> BINARY_CHECKSUM(tab2.ID, tab2.DE)
OR tab1.ID IS NULL
OR tab2.ID IS NULL
The ID column is considered the primary key.
Apart from differing records, the above query also reports records missing from either table.
Result Set
ID ID
2 2
4 NULL
NULL 6
ganeshk -
Web Analysis : populate the same table with multiple data sources
Hi folks,
I would like to know if it is possible to populate a table with multiple data sources.
For instance, I'd like to create a table with 3 columns : Entity, Customer and AvgCostPerCust.
Entity and Customer come from one Essbase, AvgCostPerCust comes from HFM.
The objective is to get a calculated member which is Customer * AvgCostPerCust.
Any ideas ?
Once again, thanks for your help.
I would like to have the following output:
File 1 - Store 2 - Query A + Store 2 - Query B
File 2 - Store 4 - Query A + Store 4 - Query B
File 3 - Store 5 - Query A + Store 5 - Query B
The bursting level should be given at
File 1 - Store 2 - Query A + Store 2 - Query B
so the tag in the XML has to be split by something common to these three rows.
Since the data is coming from different queries, it is not going to be under a single tag,
so you cannot burst it using a concatenated data source.
But you can do it using a data template: link the queries and get the data for each file under a single query,
select distinct store_name from all_stores
select * from query1 where store_name = :store_name -- 1st query
select * from query2 where store_name = :store_name -- 2nd query
define the datastructure the way you wanted,
the xml will contain something like this
<stores>
<store> </store> - for store 2
<store> </store> - for store 3
<store> </store> - for store 4
<store> </store> - for store 5
</stores>
now you can burst it at store level. -
Billing report with due date field
Dear Friends,
Can you please explain the logic of a billing report with a due date field?
Input: billing document number, date range, sales organisation.
Output: billing document number, sales organisation, amount, due date.
If any clarification is required, please let me know.
Thanks in advance
Ranjan
Is VF05 sufficient for your purpose?
Use the "further selection criteria" tab.
Amit. -
How to fill internal table with no data in debugging mode
Hi all,
I modified an existing program and now I want to test it, but I was not given any test data. In the middle of debugging, I found that one internal table has no data. My problem is how to fill that internal table with a few records while in debugging mode, just as we can change contents in debugging mode. To proceed further, that internal table must have some records.
I don't know how to create test data, so I am trying to create values temporarily in debugging mode only.
Thanks,
Balaji
Hi,
In the debugger, do the following:
Click the Table button.
Double-click on the internal table name.
Then at the bottom of the screen you will see buttons like CHANGE, INSERT, APPEND, DELETE.
Use the APPEND button to insert records into the internal table.
Thanks,
Naren -
How To Create Table with Static Data
JDEV 10.1.3
ADF BC
ADF Faces
I am trying to make some simple screen/screenflow diagrams to help flesh out some requirements. To do that, I need to make a table with static data that is not hooked up to a data source (because the data model has not yet been clearly defined, and I'm using the diagrams to help iterate the requirements).
Is it possible to create a table that shows static data (i.e. a set of rows that does not come from a model data source, but rather is hardcoded.)
If not, how does one create mock ups without actually implementing the data model?
Thank you.
Deepak, what specifically in those two links from Amis is useful? Those two posts are about bind variables, not static lists of values.
In response to the original poster, I'll attempt to help a little more.
In the 11g release you can create VOs based on a static list of values. However, in your case on 10.1.3, the best method I've found is to create a VO based on a SELECT <columns> FROM DUAL statement. The columns then contain your dummy data. If you need more than one row, simply UNION ALL a number of SELECT statements together.
What I haven't checked, is when you eventually transform the VO based on the SELECT DUAL statement into a VO based on an EO drawing real data from the database, is it an easy process? I recommend you try this out before committing to the approach above. Let us know how you go.
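A minimal sketch of such a DUAL-based VO query (the column names and values below are invented for illustration):

```sql
-- Three hard-coded rows, no real table involved
SELECT 1 AS id, 'Alpha' AS label FROM DUAL
UNION ALL
SELECT 2, 'Beta' FROM DUAL
UNION ALL
SELECT 3, 'Gamma' FROM DUAL
```

The first SELECT's aliases name the VO attributes; later branches only need matching types and positions. Swapping in a real table later means replacing the whole UNION with a normal SELECT over your entity.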
Regards,
CM. -
Separate table and index data in RAC database
Hi Experts,
Our database is Oracle11g RAC database. I need your expertise on this
Do we need to keep table and index data in two different tablespaces for performance in a RAC database too?
Please share your practical experience. Thanks in advance.
Regards
Richardg777 wrote:
In my opinion, if there is striping implemented then performance shouldn't degrade even if the index and table blocks are in one tablespace.
Exactly. Striping is NOT a good idea at tablespace level, as a tablespace is a logical storage device. It is very difficult to stripe comprehensively/correctly at that level, if not impossible.
Striping is a function of the actual storage system and needs to happen at the physical level: a proper RAID0 implementation.
So the question about multiple tablespaces for a performance increase should not be about striping - but about issues such as data management, block sizes, transportable tablespaces and so on.
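If you do separate them for manageability reasons rather than performance, the setup is straightforward; a sketch where every name, path, and size is a placeholder:

```sql
-- Separate tablespaces for data and indexes (paths/sizes are hypothetical)
CREATE TABLESPACE app_data DATAFILE '/u01/oradata/app_data01.dbf' SIZE 1G;
CREATE TABLESPACE app_idx  DATAFILE '/u01/oradata/app_idx01.dbf'  SIZE 512M;

CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
) TABLESPACE app_data;

CREATE INDEX orders_date_ix ON orders (order_date) TABLESPACE app_idx;
```

Nothing here is RAC-specific; the placement only affects which datafiles hold which segments, not how the blocks are striped on disk.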
Thus my question (at the OP) - what performance problems are expected and are these relevant to the number of tablespaces? -
Sample report for filling a database table with test data
Hi,
Can anyone provide a sample report for filling a database table with test data?
Thanks,
Abhi.
Hi, here is the code:
data : itab type table of Z6731_DEPTDETAIL,
wa type Z6731_DEPTDETAIL.
wa-DEPT_ID = 'z897hkjh'.
wa-DESCRIPTION = 'computer'.
append wa to itab.
wa-DEPT_ID = 'z897hkjhd'.
wa-DESCRIPTION = 'computer'.
append wa to itab.
loop at itab into wa.
insert z6731_DEPTDETAIL from wa.
endloop.
Reward points if helpful.