Pk index design for 500,000 row table
I have 3 tables with the following relations:
http://www.visionfly.com/images/dgm.png
Itinerary - (1:N) FlightItem - (1:N) CabinPrice
DepCity, ArrCity, and DepDate in Itinerary identify one row in that table. To save space in FlightItem and CabinPrice, I added an auto-increment field FlightId as the PK of Itinerary, plus an index on (DepCity, ArrCity, DepDate) in Itinerary.
(FlightId, FlightNo) is the PK of FlightItem, and FlightId is its FK. (FlightId, FlightNo, Cabin, PriceType) is the PK of CabinPrice, and (FlightId, FlightNo) is its FK.
Itinerary will keep about 10,000 rows.
FlightItem will keep about 50,000 rows.
CabinPrice will keep about 500,000 rows.
These 3 tables can be regarded as a whole. There are two operations on them. One is:
select * from itinerary a, flightitem f, cabinprice c where a.flightId=f.flightId and f.flightId=c.flightId and f.flightNo=c.flightNo
and a.depcity='zuh' and a.arrcity='sha' and a.depdate='2004-7-1'
It uses the index on Itinerary.
At peak there are 100 selects per second.
The other operation is a delete followed by an insert of new rows. I use cascade delete; the delete's WHERE clause is the same as the select's. The peak is 50 such operations per minute.
I intend to use EJB CMP to control them. Is this a good design for the above performance requirements? Any suggestion will be appreciated.
Stephen
This is the current design, based on MS SQL Server. We are planning to move to Oracle, so please ignore the data type design.
Stephen
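As a rough sanity check of the surrogate-key layout, here is a minimal sketch using Python's standard sqlite3 module (table and column names follow the post; the flight number, cabin, and price values are made up, and SQLite merely stands in for MS SQL Server/Oracle). It builds the three tables, adds the secondary index on (DepCity, ArrCity, DepDate), runs the three-way join from the select operation, then exercises the cascade delete:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Enforce foreign keys up front so ON DELETE CASCADE actually fires.
conn.execute("PRAGMA foreign_keys = ON")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Itinerary (
    FlightId INTEGER PRIMARY KEY AUTOINCREMENT,
    DepCity TEXT, ArrCity TEXT, DepDate TEXT
);
CREATE INDEX idx_itinerary_route ON Itinerary (DepCity, ArrCity, DepDate);
CREATE TABLE FlightItem (
    FlightId INTEGER REFERENCES Itinerary(FlightId) ON DELETE CASCADE,
    FlightNo TEXT,
    PRIMARY KEY (FlightId, FlightNo)
);
CREATE TABLE CabinPrice (
    FlightId INTEGER, FlightNo TEXT, Cabin TEXT, PriceType TEXT, Price REAL,
    PRIMARY KEY (FlightId, FlightNo, Cabin, PriceType),
    FOREIGN KEY (FlightId, FlightNo)
        REFERENCES FlightItem (FlightId, FlightNo) ON DELETE CASCADE
);
""")
cur.execute("INSERT INTO Itinerary (DepCity, ArrCity, DepDate) "
            "VALUES ('zuh', 'sha', '2004-07-01')")
fid = cur.lastrowid  # the surrogate key the children reference
cur.execute("INSERT INTO FlightItem VALUES (?, 'CZ3301')", (fid,))
cur.execute("INSERT INTO CabinPrice VALUES (?, 'CZ3301', 'Y', 'ADT', 980.0)", (fid,))

# The select operation: natural key probes the index, joins walk the FKs.
rows = cur.execute("""
    SELECT c.Cabin, c.Price
    FROM Itinerary a
    JOIN FlightItem f ON a.FlightId = f.FlightId
    JOIN CabinPrice c ON f.FlightId = c.FlightId AND f.FlightNo = c.FlightNo
    WHERE a.DepCity = 'zuh' AND a.ArrCity = 'sha' AND a.DepDate = '2004-07-01'
""").fetchall()

# The delete operation: same WHERE clause, cascade removes the children.
cur.execute("DELETE FROM Itinerary WHERE DepCity = 'zuh' AND ArrCity = 'sha' "
            "AND DepDate = '2004-07-01'")
left = cur.execute("SELECT COUNT(*) FROM CabinPrice").fetchone()[0]
print(rows, left)  # [('Y', 980.0)] 0
```

The same shape carries over to MS SQL Server or Oracle; only the auto-increment and foreign-key syntax differs.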
Similar Messages
-
Hi all,
I read the following suggestion for a SELECT with LEFT OUTER JOIN in a DB2 consulting company's paper on a 10-million-row table:
SELECT columns
FROM ACCTS A LEFT JOIN OPT1 O1
ON A.ACCT_NO = O1.ACCT_NO
AND A.FLAG1 = 'Y'
LEFT JOIN OPT2 O2
ON A.ACCT_NO = O2.ACCT_NO
AND A.FLAG2 = 'Y'
WHERE A.ACCT_NO = 1
For DB2, according to the paper, the following is true: iff A.FLAG1 <> 'Y', then no table or index access on OPT1 is done. The same holds for A.FLAG2/OPT2.
I recreated the situation for Oracle with the following script and came to some really interesting questions.
DROP TABLE maintbl CASCADE CONSTRAINTS;
DROP TABLE opt1 CASCADE CONSTRAINTS;
DROP TABLE opt2 CASCADE CONSTRAINTS;
CREATE TABLE maintbl (
id INTEGER NOT NULL,
dat VARCHAR2 (2000 CHAR),
opt1 CHAR (1),
opt2 CHAR (1),
CONSTRAINT CK_maintbl_opt1 CHECK(opt1 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
CONSTRAINT CK_maintbl_opt2 CHECK(opt2 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
CONSTRAINT PK_maintbl PRIMARY KEY(id)
);
CREATE TABLE opt1 (
maintbl_id INTEGER NOT NULL,
adddat1 VARCHAR2 (100 CHAR),
adddat2 VARCHAR2 (100 CHAR),
CONSTRAINT PK_opt1 PRIMARY KEY(maintbl_id),
CONSTRAINT FK_opt1_maintbltable FOREIGN KEY(maintbl_id) REFERENCES maintbl(id)
);
CREATE TABLE opt2 (
maintbl_id INTEGER NOT NULL,
adddat1 VARCHAR2 (100 CHAR),
adddat2 VARCHAR2 (100 CHAR),
CONSTRAINT PK_opt2 PRIMARY KEY(maintbl_id),
CONSTRAINT FK_opt2_maintbltable FOREIGN KEY(maintbl_id) REFERENCES maintbl(id)
);
INSERT ALL
WHEN 1 = 1 THEN
INTO maintbl (ID, opt1, opt2, dat) VALUES (nr, is_even, is_odd, maintbldat)
WHEN is_even = 'N' THEN
INTO opt1 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
WHEN is_even = 'Y' THEN
INTO opt2 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
SELECT LEVEL AS NR,
CASE WHEN MOD(LEVEL, 2) = 0 THEN 'Y' ELSE 'N' END AS is_even,
CASE WHEN MOD(LEVEL, 2) = 1 THEN 'Y' ELSE 'N' END AS is_odd,
TO_CHAR(DBMS_RANDOM.RANDOM) AS maintbldat,
TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat1,
TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat2
FROM DUAL
CONNECT BY LEVEL <= 100;
COMMIT;
SELECT * FROM maintbl
LEFT OUTER JOIN opt1 ON maintbl.id = opt1.maintbl_id AND maintbl.opt1 = 'Y'
LEFT OUTER JOIN opt2 ON maintbl.id = opt2.maintbl_id AND maintbl.opt2 = 'Y'
WHERE id = 1;
Explain plan for "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi":
http://i.imgur.com/f0AiA.png
As one can see, the DB uses a view to index-access the opt tables iff indicator column maintbl.opt1='Y' in the main table.
Explain plan for "Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production":
http://i.imgur.com/iKfj8.png
As one can see, the DB does NOT use the view; instead it uses a pretty useless CASE statement.
Now my questions:
1) What does the optimizer do in 11.2 XE?!?
2) In general: do you suggest this table setup? Does your yes/no depend on the row count in the tables? Of course I see the problem with incorrectly updated indicator columns and would NEVER do it if there were another, truly relational solution with the same performance.
3) Is there a way to avoid performance issues if I don't use an indicator column in ORACLE? Is this what a [Bitmap Join Index|http://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#autoId14] is for?
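Independent of which plan the optimizer picks, the result of the indicator-column outer join is easy to verify outside Oracle. A minimal sketch with Python's sqlite3 (hypothetical two-row tables) shows that when the flag is not 'Y', the extra ON condition forces NULLs for the optional table even though a matching row exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE accts (acct_no INTEGER, flag1 TEXT);
CREATE TABLE opt1  (acct_no INTEGER, dat TEXT);
INSERT INTO accts VALUES (1, 'N'), (2, 'Y');
INSERT INTO opt1  VALUES (1, 'a'), (2, 'b');
""")
# The flag predicate lives in the ON clause, so a non-'Y' flag nulls out
# the optional side instead of filtering the driving row away.
rows = cur.execute("""
    SELECT a.acct_no, a.flag1, o1.dat
    FROM accts a LEFT JOIN opt1 o1
      ON a.acct_no = o1.acct_no AND a.flag1 = 'Y'
    ORDER BY a.acct_no
""").fetchall()
print(rows)  # [(1, 'N', None), (2, 'Y', 'b')]
```

Whether the engine also skips the probe into OPT1 for the non-'Y' rows is purely an optimizer question; the answer set is the same either way.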
Thanks in advance and happy discussing,
Blama
Fair enough. I've included a cut-down set of SQL below.
CREATE TABLE DIMENSION_DATE (
DATE_ID NUMBER,
CALENDAR_DATE DATE,
CONSTRAINT DATE_ID PRIMARY KEY (DATE_ID)
);
CREATE UNIQUE INDEX DATE_I1 ON DIMENSION_DATE
(CALENDAR_DATE, DATE_ID);
CREATE TABLE ORDER_F (
ORDER_ID VARCHAR2(40 BYTE),
SUBMITTEDDATE_FK NUMBER,
COMPLETEDDATE_FK NUMBER,
...
);
-- Then I add the first bitmap index, which works:
CREATE BITMAP INDEX SUBMITTEDDATE_FK ON ORDER_F
(DIMENSION_DATE.DATE_ID)
FROM ORDER_F, DIMENSION_DATE
WHERE ORDER_F.SUBMITTEDDATE_FK = DIMENSION_DATE.DATE_ID;
-- Then attempt the next one:
CREATE BITMAP INDEX completeddate_fk
ON ORDER_F(b.date_id)
FROM ORDER_F, DIMENSION_DATE b
WHERE ORDER_F.completeddate_fk = b.date_id;
-- which results in:
-- ORA-01408: such column list already indexed -
Performance problem: 1,000 queries over a 1,000,000-row table
Hi everybody!
I have a difficult performance problem. I use JDBC over an Oracle database. I need to build a map using data from a table with around 1,000,000 rows. My query is very simple (see the code) and takes an average of 900 milliseconds, but I must perform around 1,000 queries with different parameters. The final result is that the user must wait several minutes (plus the time needed to draw the map and send it to the client).
The code, very simplified, is the following:
String sSQLCreateView =
"CREATE VIEW " + sViewName + " AS " +
"SELECT RIGHT_ASCENSION, DECLINATION " +
"FROM T_EXO_TARGETS " +
"WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
"AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
String sSQLSentence =
"SELECT COUNT(*) FROM " + sViewName +
" WHERE (RIGHT_ASCENSION BETWEEN ? AND ?) " +
"AND (DECLINATION BETWEEN ? AND ?)";
PreparedStatement pstmt = in_oDbConnection.prepareStatement(sSQLSentence);
for (int i = 0; i < 1000; i++) {
    pstmt.setDouble(1, a);
    pstmt.setDouble(2, b);
    pstmt.setDouble(3, c);
    pstmt.setDouble(4, d);
    ResultSet rset = pstmt.executeQuery();
    rset.next();  // COUNT(*) always returns exactly one row
    X = rset.getInt(1);
}
I have already created indexes on the RIGHT_ASCENSION and DECLINATION fields (trying different combinations).
I have also tried multi-threading, with very bad results.
Has anybody a suggestion ?
Thank you very much!
How many total rows are there likely to be in the view you create?
Perhaps just do a select instead of a view, and loop through the result set totalling the ranges in Java instead of trying to have 1,000 queries do the job. Something like:
int iMaxRanges = 1000;
int iCount[] = new int[iMaxRanges];

class Range implements Comparable {
    float fAMIN;
    float fAMAX;
    float fDMIN;
    float fDMAX;
    float fDelta;

    public Range(float fASC_MIN, float fASC_MAX, float fDEC_MIN, float fDEC_MAX) {
        fAMIN = fASC_MIN;
        fAMAX = fASC_MAX;
        fDMIN = fDEC_MIN;
        fDMAX = fDEC_MAX;
    }

    public int compareTo(Object range) {
        Range comp = (Range) range;
        if (fAMIN < comp.fAMIN) return -1;
        if (fAMAX > comp.fAMAX) return 1;
        if (fDMIN < comp.fDMIN) return -1;
        if (fDMAX > comp.fDMAX) return 1;
        return 0;
    }
}

List listRanges = new ArrayList(iMaxRanges);
listRanges.add(new Range(1.05f, 1.10f, 120.5f, 121.5f));
//...etc.

String sSQL =
    "SELECT RIGHT_ASCENSION, DECLINATION FROM T_EXO_TARGETS " +
    "WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
    "AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
Statement stmt = in_oDbConnection.createStatement();
ResultSet rset = stmt.executeQuery(sSQL);
while (rset.next()) {
    float fASC = rset.getFloat("RIGHT_ASCENSION");
    float fDEC = rset.getFloat("DECLINATION");
    int iRange = Collections.binarySearch(listRanges, new Range(fASC, fASC, fDEC, fDEC));
    if (iRange >= 0) {
        ++iCount[iRange];
    }
}
-
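The single-pass idea (fetch once, classify each row in memory) can also be sketched with a sorted-array lookup; here is a minimal Python version using bisect over hypothetical, non-overlapping right-ascension bands (one dimension only, for brevity):

```python
from bisect import bisect_right

# Hypothetical non-overlapping RA bands; each counts rows whose
# right ascension falls in [start, end).
bands = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
starts = [b[0] for b in bands]
counts = [0] * len(bands)

def classify(ra):
    """Return the index of the band containing ra, or -1 if none does."""
    i = bisect_right(starts, ra) - 1
    if i >= 0 and bands[i][0] <= ra < bands[i][1]:
        return i
    return -1

# Values as they would come back from a single scan of T_EXO_TARGETS
# (made-up numbers here).
for ra in [0.5, 1.2, 1.9, 2.5, 9.9]:
    i = classify(ra)
    if i >= 0:
        counts[i] += 1

print(counts)  # [1, 2, 1]
```

With 1,000 bands this is one table scan plus an O(log n) lookup per row, instead of 1,000 round-trip COUNT queries.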
MatchCode for 100,000 rows = dump
Hi!
I use FM F4IF_INT_TABLE_VALUE_REQUEST for a personal matchcode in a dynpro field.
But if the internal table has 100,000 rows, the system dumps.
How can I display the matchcode without a dump?
Thanks very much!
A matchcode where you have more than 100,000 rows is not a good matchcode!
You should provide at least some criterion to restrict the list. The maximum number of hits is only 4 digits in SAP, and you should always restrict your list according to this maximum.
You do this by adding to your SELECT statement:
UP TO callcontrol-maxrecords ROWS -
Costs for 1,000 lines of PL/SQL code?
Hi to all of you!
What do you think: how much time is necessary to develop one PL/SQL package (from first analysis to end production) with 1,000 lines of code?
Averaged complexity, specification, etc. of the requirements.
All estimations are welcome...
Hi to all of you!
What do you think how much time is necessary to develop one PL/SQL package (from analysis to production) with 1.000 lines of code?
Depends on the task, I'd say. What is that PL/SQL package supposed to do?
Averaged complexity, specification, .... of the requirements.
Please define averaged complexity, specification, etc.
All answers are welcome...
Try this:
SELECT TRUNC(ABS(dbms_random.NORMAL) * 10) no_of_dev_days_for_package
FROM dual
It may be a wild guess, but is that some kind of in-house
quiz?
C. -
Transposing column names to rows for a multi-row table
Hi
I have a table some thing like this that needs to be transposed.
FIELD_POPULATED   First_NM   Last_NM   Mi_NM
A_NULL                   0         0       0
A_NOT_NULL             120       120     120
B_NULL                   0         0       0
B_NOT_NULL               0         0       0
The above table has to be transposed as:
column_name   A_NULL   A_NOT_NULL   B_NULL   B_NOT_NULL
FIRST_NM           0          120        0            0
Last_NM            0          120        0            0
Mi_NM              0          120        0            0
I am working on Oracle 11g. Any help is greatly appreciated.
Hi,
See this thread:
Re: Help with PIVOT query (or advice on best way to do this) -
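Outside of a SQL PIVOT, the transpose itself is a small dictionary exercise. A minimal Python sketch, with the values copied from the example above:

```python
# Original layout: one row per FIELD_POPULATED value, one key per name column.
rows = {
    "A_NULL":     {"First_NM": 0,   "Last_NM": 0,   "Mi_NM": 0},
    "A_NOT_NULL": {"First_NM": 120, "Last_NM": 120, "Mi_NM": 120},
    "B_NULL":     {"First_NM": 0,   "Last_NM": 0,   "Mi_NM": 0},
    "B_NOT_NULL": {"First_NM": 0,   "Last_NM": 0,   "Mi_NM": 0},
}

# Swap the axes: one output row per original column name.
transposed = {}
for field, cols in rows.items():
    for col, val in cols.items():
        transposed.setdefault(col, {})[field] = val

print(transposed["First_NM"])
# {'A_NULL': 0, 'A_NOT_NULL': 120, 'B_NULL': 0, 'B_NOT_NULL': 0}
```

In the database itself, the linked PIVOT/UNPIVOT thread is the right tool on 11g; this sketch just shows what the operation has to produce.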
CSV output with more than 10,000 rows (trick)
Dear All,
Tricked this sort of output by:
- Making a page that displays all rows.
- Column values have hidden commas.
- JavaScript opens a new window.
- JavaScript writes all rows of the page's report to the new window.
- JavaScript tells the new window to do "Save As".
- JavaScript closes the new window.
Advice: keep the report simple, otherwise the browser will take up a lot of memory.
Steps:
1. Create a page with max 100,000 rows; report attributes:
Number of Rows 100000
Max Row Count 100000
2. create "Report template"
- set HTML id for report:
"before rows": <table id="report_loc">
- rows that display "invisible" commas
"Column Templates" : <td>#COLUMN_VALUE#<span class="fh">,</span></td>
- "after rows"
<!-- after rows -->
</table>
- column headings with "invisible" commas
"Column Headings" : <td align="#ALIGN#">#COLUMN_HEADER#<span style="color:white;">,</span></td>
3. Add JavaScript and HTML to the page
- in "HTML header" add:
<style type="text/css">
.fh { color:white; }
</style>
and:
<script type="text/javascript">
function SaveCSV(){
myWindow1 = window.open ("", "Wait", 'width=200,height=200');
myWindow1.document.writeln ( report_loc.innerText );
myWindow1.document.execCommand("SaveAs", null, "wsl_report.csv");
myWindow1.close();
}
</script>
Hope this workaround helps...
Regards, Erik
Please add a button to save your CSV. This button calls the SaveCSV function.
Create "button template" and add this button to your page:
Download CSV
Sorry, I forgot this last step.... -
Cache problem is the limitation of 500,000 images. REMOVE
Mac Pro OSX 10.6 thru 10.9.
I have Bridge CS6.
I have a DEDICATED HARD DRIVE for nothing except Bridge Cache.
My problem is....
Bridge only allows for 500,000 images to be cached.
I have nearly 3 million image files.
1. Is there a way to remove this limitation?
If not....
PLEASE !!!!
Provide an option in Bridge Preferences that allows this limitation to be removed.
Simply include a Dialog Warning explaining that Bridge Cache can use up huge amounts of disk space.
Warn the user that this "No Limit" option requires a Dedicated Cache Hard Drive (internal or external) otherwise there will be crashes and hangs and the potential to use up a system boot drive's space.
For those of us who have more than 500,000 images, this change (suggested Bridge Upgrade) would be extremely helpful.
Please do not make me repeat myself.
I have a DEDICATED HARD DRIVE for nothing except Bridge Cache.
This Bridge Cache hard drive is huge.
My only problem is the limitation of 500,000 images.
This restriction needs removal.
I understand your frustration, but then I'm not Adobe.
This is not a conduit to communicate with Adobe. Remember, you are not addressing Adobe here in the user forums. You are requesting help from volunteers users just like you who give their time free of charge.
Supposedly there are dedicated Adobe forums for feature requests and for reporting bugs. I lost track of where those are, probably because posting there is as futile as posting here.
Perhaps searching the forums directory or searching the forum itself may provide you with a link. -
Code for double-clicking rows in an ALV grid and moving them to an internal table
hi,
code for double-clicking rows in an ALV grid and moving them to an internal table
hi,
see the following code, which uses a layout and the DOUBLE_CLICK event of cl_gui_alv_grid.
TABLES: mara,marc.
DATA:obj_custom TYPE REF TO cl_gui_custom_container,
obj_alv TYPE REF TO cl_gui_alv_grid.
DATA: it_mara TYPE TABLE OF mara,
wa_mara TYPE mara,
wa_layout TYPE lvc_s_layo,
wa_variant TYPE disvariant,
x_save.
DATA:it_marc TYPE TABLE OF marc,
wa_marc TYPE marc.
SELECT-OPTIONS: s_matnr FOR mara-matnr DEFAULT 1 TO 500.
START-OF-SELECTION.
SELECT * FROM mara INTO TABLE it_mara
WHERE matnr IN s_matnr.
CALL SCREEN '100'.
* CLASS cl_dbclick DEFINITION
CLASS cl_dbclick DEFINITION.
PUBLIC SECTION.
METHODS dbl FOR EVENT double_click OF cl_gui_alv_grid
IMPORTING e_row e_column.
ENDCLASS.
DATA: obj1 TYPE REF TO cl_dbclick.
* CLASS cl_dbclick IMPLEMENTATION
CLASS cl_dbclick IMPLEMENTATION.
METHOD dbl.
IF e_row-rowtype = space AND NOT e_row-index IS INITIAL.
READ TABLE it_mara INDEX e_row-index INTO wa_mara.
SELECT * FROM marc INTO TABLE it_marc
WHERE matnr = wa_mara-matnr.
CALL METHOD obj_custom->free
EXCEPTIONS
cntl_error = 1
cntl_system_error = 2
OTHERS = 3.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
CALL SCREEN '200'.
ENDIF.
ENDMETHOD. "dbl
ENDCLASS. "cl_dbclick IMPLEMENTATION
*& Module USER_COMMAND_0100 INPUT
MODULE user_command_0100 INPUT.
CALL METHOD obj_custom->free
EXCEPTIONS
cntl_error = 1
cntl_system_error = 2
OTHERS = 3.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
CASE sy-ucomm.
WHEN 'BACK'.
LEAVE PROGRAM.
ENDCASE.
ENDMODULE. " USER_COMMAND_0100 INPUT
*& Module filldata OUTPUT
MODULE filldata OUTPUT.
CREATE OBJECT obj_custom
EXPORTING
container_name = 'CONTROL'.
CREATE OBJECT obj_alv
EXPORTING
i_parent = obj_custom.
CREATE OBJECT obj1.
SET HANDLER obj1->dbl FOR obj_alv.
CALL METHOD obj_alv->set_table_for_first_display
EXPORTING
*   i_buffer_active =
*   i_bypassing_buffer =
*   i_consistency_check =
    i_structure_name = 'MARA'
    is_variant = wa_variant
    i_save = x_save
    i_default = 'X'
    is_layout = wa_layout
*   is_print =
*   it_special_groups =
*   it_toolbar_excluding =
*   it_hyperlink =
*   it_alv_graphics =
*   it_except_qinfo =
*   ir_salv_adapter =
CHANGING
    it_outtab = it_mara
*   it_fieldcatalog =
*   it_sort =
*   it_filter =
EXCEPTIONS
    invalid_parameter_combination = 1
    program_error = 2
    too_many_lines = 3
    others = 4.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDMODULE. " filldata OUTPUT
*& Module STATUS_0100 OUTPUT
MODULE status_0100 OUTPUT.
SET PF-STATUS 'STATUS'.
SET TITLEBAR 'xxx'.
ENDMODULE. " STATUS_0100 OUTPUT
*& Module STATUS_0200 OUTPUT
MODULE status_0200 OUTPUT.
SET PF-STATUS 'STATUS'.
* SET TITLEBAR 'xxx'.
SUPPRESS DIALOG.
SET PARAMETER ID 'MAT' FIELD wa_mara-matnr.
LEAVE TO LIST-PROCESSING AND RETURN TO SCREEN 0.
*WRITE:/ wa_mara-matnr,
*       wa_mara-mbrsh,
*       wa_mara-meins.
CREATE OBJECT obj_custom
EXPORTING
container_name = 'CONTROL'.
CREATE OBJECT obj_alv
EXPORTING
i_parent = obj_custom.
CALL METHOD obj_alv->set_table_for_first_display
EXPORTING
*   i_buffer_active =
*   i_bypassing_buffer =
*   i_consistency_check =
    i_structure_name = 'MARC'
    is_variant = wa_variant
    i_save = x_save
    i_default = 'X'
    is_layout = wa_layout
*   is_print =
*   it_special_groups =
*   it_toolbar_excluding =
*   it_hyperlink =
*   it_alv_graphics =
*   it_except_qinfo =
*   ir_salv_adapter =
CHANGING
    it_outtab = it_marc
*   it_fieldcatalog =
*   it_sort =
*   it_filter =
EXCEPTIONS
    invalid_parameter_combination = 1
    program_error = 2
    too_many_lines = 3
    others = 4.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDMODULE. " STATUS_0200 OUTPUT
*& Module layout OUTPUT
MODULE layout OUTPUT.
wa_layout-grid_title = 'MATERIAL DATA'.
wa_layout-zebra = 'X'.
wa_layout-edit = 'X'.
ENDMODULE. " layout OUTPUT
*& Module variant OUTPUT
MODULE variant OUTPUT.
wa_variant-report = 'ZALV_GRID1'.
x_save = 'A'.
ENDMODULE. " variant OUTPUT
*& Module USER_COMMAND_0200 INPUT
MODULE user_command_0200 INPUT.
CASE sy-ucomm.
WHEN 'BACK'.
CALL METHOD obj_custom->free
EXCEPTIONS
cntl_error = 1
cntl_system_error = 2
OTHERS = 3.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
LEAVE TO SCREEN '100'.
ENDCASE.
ENDMODULE. " USER_COMMAND_0200 INPUT
thanks,
raji
reward if helpful -
What the best table design for item and it's serials ?
I have an Items table that holds item metadata.
What is the solution in the case where I buy 500 items (e.g. hard disks), each with a different serial number?
When I save these serials it will be one-to-many...
When I think about this solution, I find that the serial-number table will reach about 500,000 records within 2 months.
Is that the best case, or is there another solution?
Another one:
if I have an item with multiple colors,
is it best to save every item with its color as one row (item_ID, Item_name, Item_color, ...),
or to save items in one table and colors in another, and create a junction table for the many-to-many relationship?
what is the best case, and the fastest?
1. I don't think there is any problem in having a 1:N relation between the ITEM and SERIAL tables. That is how relational databases work.
2. I recommend using 1:N rather than N:M. For DML operations, 1:N has minimal impact compared to N:M. For instance, if the item color field were added to the serial number table, I might need to update multiple rows; however, if I add the color code
field to the item table, I need to update or insert only one row.
------------------ RECOMMENDATION
ITEM
ITEM_ID int (PK)
ITEM_NAME VARCHAR(50)
ITEM_DESCRIPTION VARCHAR(100)
ITEM_TYPE VARCHAR(10)
ITEM_CATEGORY VARCHAR(10)
ITEM_COLOR VARCHAR(10)
SERIAL
ITEM_ID INT (FK)
SERIAL_NUM VARCHAR(20) (UNIQUE)
Regards, RSingh -
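A quick sketch of the recommended two-table layout using Python's sqlite3 (types simplified from the recommendation above; the item and serial values are made up), showing the UNIQUE constraint on the serial number doing its job:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE item (
    item_id INTEGER PRIMARY KEY,
    item_name TEXT,
    item_color TEXT
);
CREATE TABLE serial (
    item_id INTEGER REFERENCES item(item_id),
    serial_num TEXT UNIQUE
);
""")
cur.execute("INSERT INTO item VALUES (1, 'Hard Disk', 'black')")
# 1:N — many serials point at one item row.
cur.executemany("INSERT INTO serial VALUES (1, ?)",
                [("SN-0001",), ("SN-0002",)])

# A duplicate serial number is rejected by the UNIQUE constraint.
try:
    cur.execute("INSERT INTO serial VALUES (1, 'SN-0001')")
    dup_rejected = False
except sqlite3.IntegrityError:
    dup_rejected = True

n_serials = cur.execute("SELECT COUNT(*) FROM serial").fetchone()[0]
print(dup_rejected, n_serials)  # True 2
```

500,000 rows in two months is unremarkable for a table this narrow, provided serial_num (and item_id, for the join) are indexed; the UNIQUE constraint above gives serial_num an index for free.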
My scenario is like this:
Hi, I have 2 fact tables, Fact1 and Fact2, and four dimension tables D1, D2, D3, D4, plus D1.1 and D1.2. The relations in the data model are like this:
NOTE: D1.1 and D1.2 are derived from D1, so D1 might be a snowflake.
[( D1.. 1:M..> Fact 1 , D1.. 1:M..> Fact 2 ), (D2.. 1:M..> Fact 1 , D2.. 1:M..> Fact 2 ), ( D3.. 1:M..> Fact 1 , D3.. 1:M..> Fact 2 ),( D4.. 1:M..> Fact 1 , D4.. 1:M..> Fact 2 )]
Now from D1 there is a child level like this: [D1 ..(1:M)..> D1.1, and from D1.1.. 1:M..> D1.2.. 1:M..> D4]
Please help me in modeling these for a report of 10,000 rows, and also let me know for which tables I need to enable cache.
PS: There shouldn't be a performance issue, so please help me in modeling this.
Thanks in advance to the experts who have been helping me for a while.
Shouldn't be much of a problem with just this many rows...
Model something like this only: Re: URGENT MODELING SNOW FLAKE SCHEMA
There are various ways of handling performance issues, if any, in OBIEE.
Go for a caching strategy for the complete warehouse. Make sure to purge it after every data load. If you have aggregate calculations at a higher level, then you can also go for aggregated tables in OBIEE for better performance.
http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
Hope this is clear... Go ahead with the actual implementation and let us know in case you encounter any major issues.
Cheers -
Making a field visible for a particular row in table control.
Hi Experts,
I have a scenario in which there are 7 columns in a table control, and I have made the last column invisible. Now I need to make that column visible only for the selected row; for all other rows the column should remain invisible.
How can I achieve this? I tried a lot, playing with the table control's current line and the COLS structure, but no fruit.
Please help. <removed by moderator>
Thanks,
Edited by: Thomas Zloch on Oct 19, 2010 2:38 PM
Hi All,
Sorry for the late response. I was out of station for a while.
I tried a lot after going through the link provided by Robert, and also tried what Anmol suggested, but it's not working.
The column becomes invisible, but it does not become visible again for the selected row.
The code I have written is as follows.
process before output.
module status_9000.
module tc_details_change_tc_attr.
loop at it_test
into wa_test
with control tc_details
cursor tc_details-current_line.
module tc_details_get_lines.
module customize. " Module to update table control dynamically
endloop.
module CUSTOMIZE output.
LOOP AT TC_DETAILS-cols INTO cols.
if cols-screen-name = 'WA_TEST-REMARKS'. " and wa_test-sel = 'X' ).
IF WA_TEST-SEL = 'X'.
COLS-SCREEN-ACTIVE = '1'.
COLS-INVISIBLE = '0'.
MODIFY TC_DETAILS-COLS FROM COLS INDEX SY-TABIX.
ELSE.
COLS-SCREEN-ACTIVE = '0'.
COLS-INVISIBLE = '1'.
MODIFY TC_DETAILS-COLS FROM COLS INDEX SY-TABIX.
ENDIF.
ENDIF.
ENDLOOP.
endmodule.
Please help. -
Please help: I want to change a field value in a table, based on another field value in the same row (for each added row)
I am using this code :
<HTML>
<HEAD>
<SCRIPT>
function addRow(tableID) {
var table = document.getElementById(tableID);
var rowCount = table.rows.length;
var row = table.insertRow(rowCount);
var colCount = table.rows[0].cells.length;
for(var i=0; i<colCount; i++ ) {
var newcell = row.insertCell(i);
newcell.innerHTML = table.rows[1].cells[i].innerHTML;
switch(newcell.childNodes[0].type) {
case "text":
newcell.childNodes[0].value = "";
break;
case "checkbox":
newcell.childNodes[0].checked = false;
break;
case "select-one":
newcell.childNodes[0].selectedIndex = 0;
break;}}}
function deleteRow(tableID) {
try {var table = document.getElementById(tableID);
var rowCount = table.rows.length;
for(var i=0; i<rowCount; i++) {
var row = table.rows[i];
var chkbox = row.cells[0].childNodes[0];
if(null != chkbox && true == chkbox.checked) {
if(rowCount <= 2) {
alert("Cannot delete all the rows.");
break;}
table.deleteRow(i);
rowCount--;
i--;}}}catch(e) {alert(e);}}
</SCRIPT>
</HEAD>
<BODY>
<INPUT type="button" value="Add Row" onClick="addRow('dataTable')" />
<INPUT type="button" value="Delete Row" onClick="deleteRow('dataTable')" />
<TABLE id="dataTable" width="350px" border="1">
<TR>
<TD width="32"></TD>
<TD width="119" align="center"><strong>Activity</strong></TD>
<TD width="177" align="center"><strong>Cost</strong></TD>
</TR>
<TR>
<TD><INPUT type="checkbox" name="chk"/></TD>
<TD>
<select name="s1" id="s1">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
</TD>
<TD><input type="text" name="txt1" id="txt1"></TD>
</TR>
</TABLE>
</BODY>
</HTML>
Hi,
Let me make sure you are working with a table control.
First you have to create an event (VALIDATE) to do the validation.
Inside the event:
1. First get the current index where the user has placed the cursor.
2. Once you get the index, read the internal table with the index value.
3. Now you can compare the col1 and col2 values and populate the error message.
DATA: lo_elt TYPE REF TO if_wd_context_element,
      l_index TYPE i.
lo_elt = wdevent->get_context_element( name = 'CONTEXT_ELEMENT' ).
CALL METHOD lo_elt->get_index( RECEIVING my_index = l_index ).
The above code should be written inside the event.
Thanks, -
Query taking more than 1/2 hour for 80 million rows in fact table
Hi All,
I am stuck on this query, as it is taking more than 35 minutes to execute for 80 million rows. My SLA is less than 30 minutes for 160 million rows, i.e. double the number.
Below is the query and the Execution Plan.
SELECT txn_id AS txn_id,
acntng_entry_src AS txn_src,
f.hrarchy_dmn_id AS hrarchy_dmn_id,
f.prduct_dmn_id AS prduct_dmn_id,
f.pstng_crncy_id AS pstng_crncy_id,
f.acntng_entry_typ AS acntng_entry_typ,
MIN (d.date_value) AS min_val_dt,
GREATEST (MAX (d.date_value),
LEAST ('07-Feb-2009', d.fin_year_end_dt))
AS max_val_dt
FROM Position_Fact f, Date_Dimension d
WHERE f.val_dt_dmn_id = d.date_dmn_id
GROUP BY txn_id,
acntng_entry_src,
f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.pstng_crncy_id,
f.acntng_entry_typ,
d.fin_year_end_dt
Execution Plan is as:
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
9 TABLE ACCESS FULL TABLE Date_Dimension Cost: 29 Bytes: 94,960 Cardinality: 4,748
10 TABLE ACCESS FULL TABLE Position_Fact Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
Kindly suggest, how to make it faster.
Regards,
Sid
The above is just the part of the query that is taking the maximum time.
Kindly find the entire query and the plan as follows:
WITH MIN_MX_DT
AS
( SELECT
TXN_ID AS TXN_ID,
ACNTNG_ENTRY_SRC AS TXN_SRC,
F.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP,
MIN (D.DATE_VALUE) AS MIN_VAL_DT,
GREATEST (MAX (D.DATE_VALUE), LEAST (:B1, D.FIN_YEAR_END_DT))
AS MAX_VAL_DT
FROM
proj_PSTNG_FCT F, proj_DATE_DMN D
WHERE
F.VAL_DT_DMN_ID = D.DATE_DMN_ID
GROUP BY
TXN_ID,
ACNTNG_ENTRY_SRC,
F.HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP,
D.FIN_YEAR_END_DT),
SLCT_RCRDS
AS (
SELECT
M.TXN_ID,
M.TXN_SRC,
M.HRARCHY_DMN_ID,
M.PRDUCT_DMN_ID,
M.PSTNG_CRNCY_ID,
M.ACNTNG_ENTRY_TYP,
D.DATE_VALUE AS VAL_DT,
D.DATE_DMN_ID,
D.FIN_WEEK_NUM AS FIN_WEEK_NUM,
D.FIN_YEAR_STRT AS FIN_YEAR_STRT,
D.FIN_YEAR_END AS FIN_YEAR_END
FROM
MIN_MX_DT M, proj_DATE_DMN D
WHERE
D.HOLIDAY_IND = 0
AND D.DATE_VALUE >= MIN_VAL_DT
AND D.DATE_VALUE <= MAX_VAL_DT),
DLY_HDRS
AS (
SELECT
S.TXN_ID AS TXN_ID,
S.TXN_SRC AS TXN_SRC,
S.DATE_DMN_ID AS VAL_DT_DMN_ID,
S.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS MTM_AMT,
NVL (
LAG (
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0)))
OVER (
PARTITION BY S.TXN_ID,
S.TXN_SRC,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID
ORDER BY S.VAL_DT),
0)
AS YSTDY_MTM,
SUM (
DECODE (
PNL_TYP_NM,
:B4, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS CASH_AMT,
SUM (
DECODE (
PNL_TYP_NM,
:B3, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS PAY_REC_AMT,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
NVL (TRUNC (F.REVSN_DT), S.VAL_DT) AS REVSN_DT,
S.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP
FROM
SLCT_RCRDS S,
proj_PSTNG_FCT F,
proj_ACNT_DMN AD,
proj_PNL_TYP_DMN PTD
WHERE
S.TXN_ID = F.TXN_ID(+)
AND S.TXN_SRC = F.ACNTNG_ENTRY_SRC(+)
AND S.HRARCHY_DMN_ID = F.HRARCHY_DMN_ID(+)
AND S.PRDUCT_DMN_ID = F.PRDUCT_DMN_ID(+)
AND S.PSTNG_CRNCY_ID = F.PSTNG_CRNCY_ID(+)
AND S.DATE_DMN_ID = F.VAL_DT_DMN_ID(+)
AND S.ACNTNG_ENTRY_TYP = F.ACNTNG_ENTRY_TYP(+)
AND SUBSTR (AD.ACNT_NUM, 0, 1) IN (1, 2, 3)
AND NVL (F.ACNT_DMN_ID, 1) = AD.ACNT_DMN_ID
AND NVL (F.PNL_TYP_DMN_ID, 1) = PTD.PNL_TYP_DMN_ID
GROUP BY
S.TXN_ID,
S.TXN_SRC,
S.DATE_DMN_ID,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
TRUNC (F.REVSN_DT),
S.ACNTNG_ENTRY_TYP,
F.TXN_ID)
SELECT
D.TXN_ID,
D.VAL_DT_DMN_ID,
D.REVSN_DT,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.YSTDY_MTM,
D.MTM_AMT,
D.CASH_AMT,
D.PAY_REC_AMT,
MTM_AMT + CASH_AMT + PAY_REC_AMT AS DLY_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_WEEK_NUM || D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS WTD_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS YTD_PNL,
D.ACNTNG_ENTRY_TYP AS ACNTNG_PSTNG_TYP,
'EOD ETL' AS CRTD_BY,
SYSTIMESTAMP AS CRTN_DT,
NULL AS MDFD_BY,
NULL AS MDFCTN_DT
FROM
DLY_HDRS D
Plan
SELECT STATEMENT ALL_ROWSCost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
25 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
24 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
23 VIEW Cost: 10,519,225 Bytes: 3,369,680,886 Cardinality: 7,854,734
22 WINDOW BUFFER Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
21 SORT GROUP BY Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
20 HASH JOIN Cost: 10,296,285 Bytes: 997,551,218 Cardinality: 7,854,734
1 TABLE ACCESS FULL TABLE proj_PNL_TYP_DMN Cost: 3 Bytes: 45 Cardinality: 5
19 HASH JOIN Cost: 10,296,173 Bytes: 2,695,349,628 Cardinality: 22,841,946
5 VIEW VIEW index$_join$_007 Cost: 3 Bytes: 84 Cardinality: 7
4 HASH JOIN
2 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_PK Cost: 1 Bytes: 84 Cardinality: 7
3 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_UNQ Cost: 1 Bytes: 84 Cardinality: 7
18 HASH JOIN RIGHT OUTER Cost: 10,293,077 Bytes: 68,925,225,244 Cardinality: 650,237,974
6 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,986 Bytes: 4,545,502,426 Cardinality: 77,042,414
17 VIEW Cost: 7,300,017 Bytes: 30,561,184,778 Cardinality: 650,237,974
16 MERGE JOIN Cost: 7,300,017 Bytes: 230,184,242,796 Cardinality: 650,237,974
8 SORT JOIN Cost: 30 Bytes: 87,776 Cardinality: 3,376
7 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 87,776 Cardinality: 3,376
15 FILTER
14 SORT JOIN Cost: 7,238,488 Bytes: 25,269,911,792 Cardinality: 77,042,414
13 VIEW Cost: 1,835,219 Bytes: 25,269,911,792 Cardinality: 77,042,414
12 SORT GROUP BY Cost: 1,835,219 Bytes: 3,698,035,872 Cardinality: 77,042,414
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
9 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 94,960 Cardinality: 4,748
10 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414 -
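One detail worth isolating from the big query: YSTDY_MTM is just NVL(LAG(mtm) OVER (PARTITION BY keys ORDER BY val_dt), 0), i.e. the previous row's amount within each partition, defaulting to 0 on the first row. A tiny Python sketch of that window logic (made-up amounts, rows pre-sorted by partition key then date):

```python
# (txn_id, val_dt, mtm_amt), already ordered by txn_id, then val_dt,
# the way the window function sees them.
rows = [
    (1, "2009-02-05", 10.0),
    (1, "2009-02-06", 12.5),
    (2, "2009-02-05", 7.0),
]

out = []
prev_key, prev_amt = None, 0.0
for txn_id, val_dt, mtm in rows:
    if txn_id != prev_key:          # first row of a new partition
        prev_key, prev_amt = txn_id, 0.0
    # NVL(LAG(mtm) OVER (PARTITION BY txn_id ORDER BY val_dt), 0)
    out.append((txn_id, val_dt, mtm, prev_amt))
    prev_amt = mtm

print(out)
# [(1, '2009-02-05', 10.0, 0.0), (1, '2009-02-06', 12.5, 10.0),
#  (2, '2009-02-05', 7.0, 0.0)]
```

Because LAG needs its partition fully sorted, each such window in the real query adds a WINDOW SORT over the whole row set, which is visible as the two WINDOW SORT steps at the top of the plan above.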
Display 100,000 rows in Table View
Hi,
I am in receipt of a strange requirement from a customer, who wants a report which returns about 100,000 rows, based on a Direct Database Request.
I understand that OBIEE is not an extraction tool, and that any report which has more than 100-200 rows is not very useful. However, the customer is insistent that such a report be generated.
The report returns about 97,000 rows and has about 12 columns and is displayed as a Table View.
To try and generate the report, i have set the ResultRowLimit in the instanceconfig.xml file to 150,000 and restarted the services. I have also set the query limits in the RPD to 150,000, so this is not the issue as well.
When running the report, the session log shows the record count as 97,452 showing that all the records are available in the BI Server.
However, when i click on the display all the rows button at the end of the report, the browser hangs after about 10 minutes with nothing being displayed.
I have gone through similar posts, but there was nothing conclusive mentioned in them. Any input to fix the above issue will be highly appreciated.
Thanks,
Ab
Edited by: obiee_user_ab on Nov 9, 2010 8:25 PM
Hi Saichand,
The client wants the data to be downloaded in CSV, so the row limit in the Excel template, that OBIEE uses is not an issue.
The 100,000 rows that are retrieved is after using a Dashboard Prompt with 3 parameters.
The large number of rows is because these are month end reports, which is more like extraction.
The customer wants to implement this even though OBIEE does not work well with a large number of rows, as there are only a couple of reports like this and it would be an expensive proposition to use a different reporting system for only 3-4 reports.
Hence, I am on the lookout for a way to implement this in OBIEE.
The other option is to directly download the report into CSV, without having to load all the records into the browser first. To do the same, I read a couple of blog entries, but the steps mentioned were not clear. So any help on this front will also be great.
Thanks,
Ab
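For the download-without-rendering route, the usual pattern is to stream rows straight into a CSV writer instead of materializing them in a browser table first. A generic Python sketch with the standard csv module (the header and rows here are made up; in practice the row source would be a database cursor):

```python
import csv
import io

def stream_csv(rows, header):
    """Write rows to CSV incrementally; memory stays flat in the row count
    when the buffer is flushed to the client as it fills."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(header)
    for row in rows:          # rows can be any iterator, e.g. a DB cursor
        w.writerow(row)
    return buf.getvalue()

data = [(1, "Jan", 100), (2, "Feb", 250)]
out = stream_csv(data, ["id", "month", "amount"])
print(out.splitlines()[0])  # id,month,amount
```

The browser never has to render 100,000 table cells; it only receives a file, which is why the CSV route scales where the table view hangs.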