Do we have to create indexes on ODS if we report on ODS
Hello all,
I have a report that runs on an InfoCube, and from this report there are RRI jumps to detailed reports that run on 3 other ODS objects.
So do I have to create indexes on the ODS objects as well to improve performance?
Also, if we have to create an index on an ODS, how do we determine which InfoObjects to use in the index?
Thanks in advance
Thanks voodi,
I guess that tells me exactly what I was looking for. It's just that we are running reports in production for the first time, and we see some performance issues when doing the jump, so I was wondering.
Are there any other things you think I can do to take care of performance?
Do we create aggregates right away or after using the reports for few times? Do we create aggregates on ODS as well?
Thanks,
points assigned to both of you
Similar Messages
-
How to create indexes on ODS ?
Hello friends ,
Need some help .
Could any one please let me know how to create indexes on ODS ?
How Indexes are useful on ODS ?
Thanks in advance
Regards
Dear Akshay,
Below is some information about indexes and their creation for ODS.
You can search a table for data records that satisfy certain search criteria faster using an index.
An index can be considered a copy of a database table that has been reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example using a binary search. The index also contains a pointer to the corresponding record of the actual table so that the fields not contained in the index can also be read.
The primary index is distinguished from the secondary indexes of a table. The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
You can also create further indexes on a table in the ABAP Dictionary. These are called secondary indexes. Under Indexes, you can create secondary indexes via the context menu in order to improve the load and query performance of the ODS object. This is necessary if the table is frequently accessed in a way that does not take advantage of the sorting of the primary index.
The database system sometimes does not use a suitable index for a selection, even if there is one. The index used depends on the optimizer of the database system. You should therefore check whether the index you created is actually used for the selection (see How to Check if an Index is Used).
Creating an additional index could also have side effects on the performance. This is because an index that was used successfully for selection might not be used any longer by the optimizer if the optimizer estimates (sometimes incorrectly) that the newly created index is more selective.
The indexes on a table should therefore be as disjunct as possible, that is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.
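The mechanics described above can be seen in any relational database. Here is a small sketch using SQLite from Python (illustrative only: the table and index names are made up, and BW generates its ODS indexes through the ABAP Dictionary rather than raw SQL) showing a secondary index turning a full scan into an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The primary key gives the primary index; the extra index on customer
# is a "secondary index" in the sense described above.
con.execute("CREATE TABLE sales (doc_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
con.execute("CREATE INDEX idx_sales_customer ON sales (customer)")
con.executemany(
    "INSERT INTO sales (customer, amount) VALUES (?, ?)",
    [("C%03d" % (i % 50), float(i)) for i in range(1000)],
)
# The plan shows a search via the secondary index instead of a full table scan:
# the sorted index is binary-searched, then the row pointer fetches amount.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT amount FROM sales WHERE customer = 'C007'"
).fetchall()
print(plan[0][3])
```

A selection on `customer` can now binary-search the sorted index copy and follow the row pointer for the fields not in the index, exactly as the text describes.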
With Regards,
Prafulla Singh -
ABAP Routine for Deleting and creating index for ODS in Process chains
Any pointers for the ABAP Routine code for deleting and creating index for ODS in Process chains.
Hi Sachin,
Find below the ABAP code to delete the ODS indexes:
DATA: v_ods TYPE rsdodsobject.
MOVE 'ODSname' TO v_ods. " the ODS technical name
CALL FUNCTION 'RSSM_PROCESS_ODS_DROP_INDEXES'
  EXPORTING
    i_ods = v_ods.
To create the indexes:
DATA: v_ods TYPE rsdodsobject.
MOVE 'ODSname' TO v_ods. " the ODS technical name
CALL FUNCTION 'RSSM_PROCESS_ODS_CREA_INDEXES'
  EXPORTING
    i_ods = v_ods.
Hope it helps.
regards,
Raju -
Need info related to creating indexes on ODS.
Hi All,
I have transported manually created secondary indexes on my ODS to quality system.
Now I have a requirement where i have to optimise the query performance.
In quality, I activated the ODS. There was data in the ODS before the index was created.
But still my Query is taking too long to display records.
Now my main doubt is since I have transported the manually created indexes,
when these indexes will start working whether at the time of data loading or after the activation of ODS?
Please advise on how I can optimise the query performance.
Hi Priyanka,
I think your indexes should be working immediately after the transport.
You can always create indexes even when there is preexisting data. As soon as you save the indexes, they are created on the database.
You can check whether your indexes are being used by your query in RSRT: execute the query in debug mode with "Display run schedule" selected and check the execution plan in the data manager.
You can check whether your indexes have been created on the ODS active table in SE11 (index maintenance) and also in DB02, I believe.
To optimize your query performance, the filters in the queries should be on the primary index or the secondary indexes. The execution plan will show whether the indexes are being used and the cost saving from their usage. Try to create indexes only when absolutely essential, because, as mentioned in the post below, they affect loading performance, since the indexes have to be maintained for the newly loaded data.
If you are using an Oracle database, you can consider partitioning your InfoProvider.
If you are using DB2, you can try multidimensional clustering.
You can choose an appropriate read mode and cache mode for the query.
You can archive historic data that is no longer reported on, to increase the reporting speed.
You can design the query correctly, with correct placement of filters.
A point to note when creating secondary indexes: the order and the number of characteristics in the secondary index are also determining factors for whether your query uses the index.
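The point about column order can be demonstrated with a small SQLite sketch (illustrative only, with made-up names; the same principle applies to secondary indexes on an ODS active table): a composite index is usable when the filter covers its leading column, but not when it filters only on a trailing column.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a TEXT, b TEXT, v REAL)")
con.execute("CREATE INDEX idx_t_ab ON t (a, b)")  # composite index: a first, then b
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [("A%d" % (i % 10), "B%d" % (i % 100), float(i)) for i in range(1000)],
)
# Filter on the leading column: the composite index is used.
plan_a = con.execute("EXPLAIN QUERY PLAN SELECT v FROM t WHERE a = 'A1'").fetchall()
# Filter on the trailing column alone: the index cannot be used, full scan.
plan_b = con.execute("EXPLAIN QUERY PLAN SELECT v FROM t WHERE b = 'B1'").fetchall()
print(plan_a[0][3])
print(plan_b[0][3])
```

This is why the most selective, most frequently filtered characteristic should usually come first in a secondary index.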
Hope this helps,
Best regards,
Sunmit. -
How to copy the queries that i have created for an ods to cube?
Hi all,
My queries were created on an ODS; there are 20-30 queries on that ODS. I have now created a cube based on that ODS, and I wanted to know whether there is any way I can copy all the queries that were created on the ODS to the cube, since my cube contains the same key figures and characteristics as the ODS.
Thanks
Haritha
Dear Haritha
You have a T-code RSZC
Give the name of the source InfoProvider and the target InfoProvider...
Make sure that the target InfoProvider has all the characteristics and key figures of the source InfoProvider, or more...
This works...
Regards
Gajendra -
Create Index Step taking long time
I have a create index step in a process chain for an InfoCube. The Create Index step takes a long time. The requests in this cube are rolled up and compressed, and batch processes are also available. But still the create index step takes a long time. Any suggestions to reduce the time of the create index step?
Hi,
If you have a lot of data in the cube, then it will take some time. Check for any dumps in ST22 and SM21, and ask the Basis team to check for any error messages in DB02.
Else go to RSRV:
Tests in Transaction RSRV --> Database --> Database Indices of an InfoCube and Its Aggregates
Give the cube name and execute it; if any check comes back RED, click on the Repair icon and check again.
Thanks
Reddy -
When do I really need to create indexes for a table?
Once I was talking to a DBA at a conference.
He told me that I don't always have to create indexes for a table; it depends on its size.
He said that Oracle reads records in blocks, and a small table Oracle can read fully in a single operation, so in those cases I don't need indexes and statistics.
So I would like to know how to calculate this.
When do I really need to create indexes for a table?
If someone knows any document that explains this, or has some tips, I'd appreciate it.
Thanks.
P.S.: The version that I'm using is Oracle 9.2.0.4.0.
Hi Vin
You mentioned so many mistakes here, I don't know where to begin ...
vprabhu_2000 wrote:
There are different kinds of index. B-tree index is the default; others include bitmap indexes, function-based indexes, and index-organized tables.
B-tree index if the table is large This is incorrect. Small tables, even those consisting of rows within just one block, can benefit from an index. There is no table size so small that an index might not be beneficial. William Robertson in his post references links to my blog where I discuss this.
and if you want to retrieve 10 % or less of data then B-tree index is good. This is all wrong as well. A FTS on a (say) million row table could very well be more efficient when retrieving (say) just 1% of data. An index could very well be more efficient when retrieving 100% of data. There's nothing special about 10% and there is no such magic number ...
>
Bit Map Index - On low cardinality columns like Sex, for example, which could have values Male/Female, create a bitmap index. Completely and utterly wrong. A bitmap index might be the perfect type of index, better than a B-tree, even if there are (say) 100,000 distinct values in the table. That a bitmap index is only suitable for low cardinality columns is just not true. And what if it's an OLTP application, with lots of concurrent DML on the underlying table: do you really think a bitmap index would be a good idea?
>
You can also create an Index organized table if there are less rows to be stored so data is stored only once in index and not in table. Not sure what you mean here but an IOT can potentially be useful if you have very large numbers of rows in the table. The number of rows has nothing to do with whether an IOT is suitable or not.
>
Hope this info helps. Considering most of it is wrong, I'm not sure it really helps at all :(
Cheers
Richard Foote
http://richardfoote.wordpress.com/ -
Hi,
I am trying to create an rdlc file programmatically, using a memory table as the dataset. Here is my code:
' For each field in the resultset, add the name to an array list
Dim m_fields As ArrayList
m_fields = New ArrayList()
Dim i As Integer
For i = 0 To tbdataset.Tables(0).Columns.Count - 1
m_fields.Add(tbdataset.Tables(0).Columns(i).ColumnName.ToString)
Next i
' Create Report
' http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition
' http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition
' Open a new RDL file stream for writing
Dim stream As FileStream
stream = File.OpenWrite("D:\MyTestReport2.rdlc")
Dim writer As New XmlTextWriter(stream, Encoding.UTF8)
' Causes child elements to be indented
writer.Formatting = Formatting.Indented
' Report element
writer.WriteProcessingInstruction("xml", "version=""1.0"" encoding=""utf-8""")
writer.WriteStartElement("Report")
writer.WriteAttributeString("xmlns", Nothing, "http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition")
writer.WriteAttributeString("xmlns:rd", "http://schemas.microsoft.com/SQLServer/reporting/reportdesigner")
writer.WriteStartElement("ReportSections")
writer.WriteStartElement("ReportSection")
writer.WriteElementString("Width", "11in")
writer.WriteStartElement("Body")
writer.WriteElementString("Height", "5in")
writer.WriteStartElement("ReportItems")
writer.WriteStartElement("Tablix")
writer.WriteAttributeString("Name", Nothing, "Tablix1")
writer.WriteElementString("Top", ".5in")
writer.WriteElementString("Left", ".5in")
writer.WriteElementString("Height", ".5in")
writer.WriteElementString("Width", (m_fields.Count * 1.5).ToString() + "in")
writer.WriteStartElement("TablixBody")
' Tablix Columns
writer.WriteStartElement("TablixColumns")
For Each fieldName In m_fields
writer.WriteStartElement("TablixColumn")
writer.WriteElementString("Width", "1.5in")
writer.WriteEndElement() ' TablixColumn
Next fieldName
writer.WriteEndElement() ' TablixColumns
' Header Row
writer.WriteStartElement("TablixRows")
writer.WriteStartElement("TablixRow")
writer.WriteElementString("Height", ".25in")
writer.WriteStartElement("TablixCells")
For Each fieldName In m_fields
writer.WriteStartElement("TablixCell")
writer.WriteStartElement("CellContents")
writer.WriteStartElement("Textbox")
writer.WriteAttributeString("Name", Nothing, "Header" + fieldName)
' writer.WriteAttributeString("CanGrow", True)
' writer.WriteAttributeString("Keeptogether", True)
writer.WriteStartElement("Paragraphs")
writer.WriteStartElement("Paragraph")
writer.WriteStartElement("TextRuns")
writer.WriteStartElement("TextRun")
writer.WriteElementString("Value", fieldName)
writer.WriteStartElement("Style")
writer.WriteElementString("TextDecoration", "Underline")
writer.WriteElementString("PaddingTop", "0in")
writer.WriteElementString("PaddingLeft", "0in")
writer.WriteElementString("LineHeight", ".5in")
'' writer.WriteElementString("Width", "1.5in")
'' writer.WriteElementString("Value", fieldName)
writer.WriteEndElement() ' Style
writer.WriteEndElement() ' TextRun
writer.WriteEndElement() ' TextRuns
writer.WriteEndElement() ' Paragraph
writer.WriteEndElement() ' Paragraphs
writer.WriteEndElement() ' TexBox
writer.WriteEndElement() ' CellContents
writer.WriteEndElement() ' TablixCell
Next
writer.WriteEndElement() ' TablixCells
writer.WriteEndElement() ' TablixRow
' writer.WriteEndElement() ' TablixRows -- do not close the Rows tag here; close it after the details
' End of Headers
' Details Rows
' writer.WriteStartElement("TablixRows") -- since the Rows tag from the header is not closed, no need to open a fresh tag
writer.WriteStartElement("TablixRow")
writer.WriteElementString("Height", ".25in")
writer.WriteStartElement("TablixCells")
For Each fieldName In m_fields
writer.WriteStartElement("TablixCell")
writer.WriteStartElement("CellContents")
writer.WriteStartElement("Textbox")
writer.WriteAttributeString("Name", Nothing, fieldName)
' writer.WriteAttributeString("CanGrow", True)
' writer.WriteAttributeString("Keeptogether", True)
writer.WriteStartElement("Paragraphs")
writer.WriteStartElement("Paragraph")
writer.WriteStartElement("TextRuns")
writer.WriteStartElement("TextRun")
'writer.WriteElementString("Value", fieldName)
writer.WriteElementString("Value", "=Fields!" + fieldName + ".Value")
writer.WriteStartElement("Style")
writer.WriteElementString("TextDecoration", "Underline")
writer.WriteElementString("PaddingTop", "0in")
writer.WriteElementString("PaddingLeft", "0in")
writer.WriteElementString("LineHeight", ".5in")
'' writer.WriteElementString("Width", "1.5in")
'' writer.WriteElementString("Value", fieldName)
writer.WriteEndElement() ' Style
writer.WriteEndElement() ' TextRun
writer.WriteEndElement() ' TextRuns
writer.WriteEndElement() ' Paragraph
writer.WriteEndElement() ' Paragraphs
writer.WriteEndElement() ' TexBox
writer.WriteEndElement() ' CellContents
writer.WriteEndElement() ' TablixCell
Next
writer.WriteEndElement() ' TablixCells
writer.WriteEndElement() ' TablixRow
writer.WriteEndElement() ' TablixRows
' End of Details Rows
writer.WriteEndElement() ' TablixBody
writer.WriteStartElement("TablixRowHierarchy")
writer.WriteStartElement("TablixMembers")
writer.WriteStartElement("TablixMember")
' Group
writer.WriteElementString("KeepWithGroup", "After")
writer.WriteEndElement() ' TablixMember
' Detail Group
writer.WriteStartElement("TablixMember")
writer.WriteStartElement("Group")
writer.WriteAttributeString("Name", Nothing, "Details")
writer.WriteEndElement() ' Group
writer.WriteEndElement() ' TablixMember
writer.WriteEndElement() ' TablixMembers
writer.WriteEndElement() ' TablixRowHierarchy
writer.WriteStartElement("TablixColumnHierarchy")
writer.WriteStartElement("TablixMembers")
' writer.WriteStartElement("TablixMember")
For Each fieldName In m_fields
writer.WriteStartElement("TablixMember")
writer.WriteEndElement() ' TablixMember
Next
' writer.WriteEndElement() ' TablixMember
writer.WriteEndElement() ' TablixMembers
writer.WriteEndElement() ' TablixColumnHierarchy
writer.WriteElementString("DataSetName", "tbdataset")
writer.WriteEndElement() ' Tablix
writer.WriteEndElement() ' ReportItems
writer.WriteEndElement() ' Body
writer.WriteStartElement("Page")
' Page Header Element
writer.WriteStartElement("PageHeader")
writer.WriteElementString("Height", "1.40cm")
writer.WriteStartElement("ReportItems")
writer.WriteStartElement("Textbox")
writer.WriteAttributeString("Name", Nothing, "Textbox1")
writer.WriteStartElement("Paragraphs")
writer.WriteStartElement("Paragraph")
writer.WriteStartElement("TextRuns")
writer.WriteStartElement("TextRun")
writer.WriteElementString("Value", Nothing, "ABC CHS.")
writer.WriteEndElement() ' TextRun
writer.WriteEndElement() ' TextRuns
writer.WriteEndElement() ' Paragraph
writer.WriteEndElement() ' Paragraphs
writer.WriteEndElement() ' TextBox
writer.WriteEndElement() ' ReportItems
writer.WriteEndElement() ' PageHeader
writer.WriteEndElement() ' Page
writer.WriteEndElement() ' ReportSection
writer.WriteEndElement() ' ReportSections
' DataSources
writer.WriteStartElement("DataSources")
writer.WriteStartElement("DataSource")
writer.WriteAttributeString("Name", Nothing, "tbdata")
writer.WriteStartElement("DataSourceReference")
writer.WriteEndElement() ' DataSourceReference
writer.WriteEndElement() ' DataSource
writer.WriteEndElement() ' DataSources
' DataSet
writer.WriteStartElement("DataSets")
writer.WriteStartElement("DataSet")
writer.WriteAttributeString("Name", Nothing, "tbdataset")
writer.WriteStartElement("Query")
writer.WriteElementString("DataSourceName", Nothing, "tbdata")
'writer.WriteElementString("CommandText", Nothing, "/* Local Query */")
writer.WriteElementString("CommandText", Nothing, "TableDirect")
writer.WriteEndElement() ' Query
' Fields
writer.WriteStartElement("Fields")
For Each fieldName In m_fields
writer.WriteStartElement("Field")
writer.WriteAttributeString("Name", Nothing, fieldName)
writer.WriteElementString("DataField", fieldName)
writer.WriteElementString("rd:TypeName", fieldName.GetType.ToString)
writer.WriteEndElement() ' Field
Next
writer.WriteEndElement() ' Fields
' rd datasetinfo
writer.WriteEndElement() ' DataSet
writer.WriteEndElement() ' DataSets
writer.WriteEndElement() ' Report
' Flush the writer and close the stream
writer.Flush()
stream.Close()
' Convert to stream
Dim myByteArray As Byte() = System.Text.Encoding.UTF8.GetBytes("D:\MyTestReport2.rdlc")
Dim ms As New MemoryStream(myByteArray)
' Supply stream to ReportViewer
ReportViewer1.LocalReport.LoadReportDefinition(ms)
ReportViewer1.LocalReport.Refresh()
When I open the rdlc in the designer I get the following error: "Data at the root level is invalid." When I run the aspx I get the following error:
An error occurred during local report processing.
The definition of the report '' is invalid.
The definition of this report is not valid or supported by this version of Reporting Services.
The report definition may have been created with a later version of Reporting Services, or contain content that is not well-formed or not valid based on Reporting Services schemas.
Details: Data at the root level is invalid. Line 1, position 1.
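An editorial note, not raised in the replies below: `Encoding.UTF8.GetBytes("D:\MyTestReport2.rdlc")` in the code above encodes the path string itself, not the XML the writer produced, so the stream handed to `LoadReportDefinition` begins with `D:\...` instead of `<?xml`, which matches "Data at the root level is invalid. Line 1, position 1." The distinction, sketched in Python:

```python
import os
import tempfile
from pathlib import Path

# Write a tiny XML "report" to disk, as the VB code does with the rdlc file.
path = os.path.join(tempfile.gettempdir(), "MyTestReport2.rdlc")
Path(path).write_text('<?xml version="1.0"?><Report/>', encoding="utf-8")

wrong = path.encode("utf-8")      # bytes of the *path string*, not valid XML
right = Path(path).read_bytes()   # bytes of the actual file contents
print(right.startswith(b"<?xml"), wrong.startswith(b"<?xml"))
```

In the VB code the equivalent fix would be reading the written file's bytes (e.g. with `File.ReadAllBytes`) before wrapping them in the `MemoryStream`.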
Can anybody guide me?
Hi Wendy Fu,
Thanks for your feedback. I could see Microsoft.ReportViewer.ProcessingObjectModel.dll to add as a reference to my project. I can actually open the generated rdlc in the designer; it is at run time that I get the error. I could not make out which of the three options shown is the exact mistake:
The definition of this report is not valid or supported by this version of Reporting Services.
The report definition may have been created with a later version of Reporting Services
or contain content that is not well-formed or not valid based on Reporting Services schemas
Details: Data at the root level is invalid
My web config has following references
<add assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845DCD8080CC91"/>
<add assembly="Microsoft.ReportViewer.Common, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845DCD8080CC91"/>
Maybe I have to change these versions to 9 or 10.
First I will try adding Microsoft.ReportViewer.ProcessingObjectModel.dll.
Thanks once again for your reply.
Races -
Don't have primary Key in Target table getting errror while creating index
Hi All,
I don't have a primary key column in the target table; while executing the mapping I got an error while creating an INDEX.
Could you please help with how to solve this?
Hi,
That is a process definition issue.
If you don't have a PK, then either:
1) you don't execute updates, or
2) you have an alternate key to update with.
Case 1) just change the KM to IKM Control Append.
Case 2) at the interface, go to each column of the alternate key and check it as a key (click the column and check the Key box at the bottom of the properties window).
Does it work for you? -
Is it possible to create secondary indexes on ODS in Production
Hi,
Is it possible to create secondary indexes on an ODS in the production system? I need to create secondary indexes on an ODS, but it is already in production. Hence, can I directly create the secondary indexes without transporting them from dev to production?
Hi,
Secondary indexes for a DSO can be transported. For the transport, the DSO object needs to be transported (R3TR ODSO <technical name>).
Secondary indexes are quite often necessary on DSO tables, and these can be transported. In a few cases you need indexes on other BW tables, but those cannot be transported. Never create additional indexes on InfoCube tables (the E- and F-fact tables and dimension tables).
Thanks & B.R.
Vince -
BI Loading to Cube Manually with out creating Indexes.
BW 3.5
I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS after loading to the staging and transformation layer.
The data loaded into the InfoCube is scheduled in the process chain as
delete Index > delete contents of the cube> Load Data to the Cube --> Create Index.
The above process chain load to the cube normally takes 5-6 hrs.
The only concern I have is that at times, if the process chain fails at the staging layer or transformation layer, then I have to rectify it manually.
After rectifying the error, I then have to load the data to the cube.
I am left with only a couple of hours, say 2-3 hrs, to complete the process chain load to the cube because of business hours.
Kindly let me know, in the above case where I am short of time to load data to the cube via the process chain:
Can I manually delete the contents of the cube and load the data to the cube? Here I would not be deleting the existing index(es) and creating index(es) after loading to the cube, because creation of indexes normally takes a long time, which I can avoid where I am short of time.
Can I do the above at times, and what are the impacts?
If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?
Also deleting contents of the cubes deletes the indexes.
Thanks
Note: As far as I understand, indexes are created to improve performance at the loading and query level.
Your input will be appreciated.
Hi Pawan,
Please find my views below, inline after each of your points.
BW 3.5
I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS after loading to the staging and transformation layer.
The data loaded into the InfoCube is scheduled in the process chain as
delete Index > delete contents of the cube> Load Data to the Cube --> Create Index.
I assume you are deleting the entire contents of the cube. If this is the normal pattern of loads to this cube, and there are no other loads to it, you may consider the InfoCube setting "Delete InfoCube indexes before each data load and then refresh". You will find this setting in the Performance tab, in the create index batch option. Read the F1 help of the checkbox; it will provide more info.
The above process chain load to the cube normally takes 5-6 hrs.
The only concern I have is that at times, if the process chain fails at the staging layer or transformation layer, then I have to rectify it manually.
After rectifying the error, I then have to load the data to the cube.
I am left with only a couple of hours, say 2-3 hrs, to complete the process chain load to the cube because of business hours.
Kindly let me know, in the above case where I am short of time to load data to the cube via the process chain:
Can I manually delete the contents of the cube and load the data to the cube? YES, you can. Here I will not be deleting the existing index(es) and creating index(es) after loading to the cube, because creation of indexes normally takes a long time, which I can avoid where I am short of time.
Can I do the above at times, and what are the impacts? Impacts: lower query performance and loading performance, as you mentioned.
If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?
I don't entirely understand the question above, but I assume you mean: if you did a manual load, will there be a failure the next day? THERE WOULDN'T.
Also, deleting the contents of the cube deletes the indexes.
YES, it does.
Thanks
Pavan - You can skip creating indices, but you will have slower query performance. However, if you have no further loads to this cube, you could create your indices during business hours as well. I think the building of indices demands a lock on the cube, and since you are not loading anything else you should be able to manage it. Lastly, is there no way you can remodel this cube and flow... do you really need to have full data loads?
Note: As far as I understand, indexes are created to improve performance at the loading and query level. TRUE
your input will be appreciated.
Hope it helps,
Regards,
Sunmit. -
Create Index to use Like Clause
Hi All,
I want one of my queries, which runs with a LIKE clause, to use an index. I have not done that before, but I have heard and seen through forums that it is possible to create indexes for a column used with a LIKE clause, using a function-based index.
Let me list down what I have done, and request the forum users to help me achieve my objective.
Function:
CREATE OR REPLACE FUNCTION RND_LIKE(P_NO IN VARCHAR2)
RETURN VARCHAR2 IS
RESULT VARCHAR2(240);
BEGIN
RETURN P_NO||'%';
END RND_LIKE;
SELECT ENAME FROM EMP WHERE ENAME LIKE RND_LIKE('A')
Here based on this function i want to create a function based index and force the same to my query. Request the forum users to help me out in this.
Thanks
Edited by: ramarun on Dec 18, 2009 9:26 PM
In the case you had there, Oracle would use an index on ename in a query if you were to type A% in the ename item on a form. You wouldn't need a function-based index for that.
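The reply's point, that a plain index already handles prefix patterns like 'A%', can be illustrated outside Oracle too. A small SQLite sketch (the principle carries over, though Oracle's optimizer behaves differently in detail, and the names here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA case_sensitive_like = ON")  # required for SQLite's LIKE prefix optimization
con.execute("CREATE TABLE emp (ename TEXT)")
con.execute("CREATE INDEX idx_emp_ename ON emp (ename)")  # ordinary index, no function
con.executemany("INSERT INTO emp VALUES (?)",
                [("ADAM",), ("ALLEN",), ("BLAKE",), ("CLARK",)])
# A literal-prefix LIKE is rewritten into a range scan over the ordinary index.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT ename FROM emp WHERE ename LIKE 'A%'"
).fetchall()
print(plan[0][3])
```

A function-based index only becomes necessary when the predicate applies a function to the column itself (e.g. UPPER(ename)), not when the pattern is a plain prefix.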
Here's the link to the documentation to create a function based index http://download-uk.oracle.com/docs/cd/B28359_01/server.111/b28310/indexes003.htm#i1006674 -
What is the difference between creating index on cube and infopkg in PC
Hi All
I have a process chain in which, after executing an InfoPackage (data load), there is a create index step on the cube (i.e. Object type is Cube), for which the execution time is 1 hour. Then in a subsequent step there is again a create index step, this time with object type "infopkg", before executing an InfoPackage, and it takes 2 minutes. What is the difference between these two? If I remove the create index on the cube, I can save 1 hour. I have to review this
chain for performance, so please post your thoughts; it's urgent, and your help will be highly appreciated. Thanks in advance.
regards
EA
By default, when you use the create index process type, the Object type is InfoPackage - change it to the cube's technical name.
If it is Cube, indexes will be deleted or created for all the data in the cube.
Message was edited by:
Jr Roberto -
I am trying to create a new ods ?
hi all,
Can anyone help me out? I am trying to create a new ODS,
wherein I have characteristics and key figures.
In ods we have - data fields / key fields.
1. Can data fields contain characteristics as well as key figures?
2. Can key fields contain key figures as well as characteristics?
In which scenario should characteristics be included in the key fields?
If you have any docs about ODS, send them across to my email id [email protected]
Regards
Haritha
Hi Haritha,
For your first question: YES, you can take characteristics as data fields and create the ODS that way.
For your second question: NO, you can't take key figures as key fields.
The reason: characteristics are what you analyse the reports by, so the key fields should be whatever level you want to report at. For example, if you want the report by EMPLOYEE NUMBER, take Emp ID as a key field; if instead you want the report Dept ID-wise, take Dept ID as the key field and Emp ID as a data field.
So, entirely based on your reporting requirement, you can put characteristics as key fields or data fields.
Key figures, however, are performance indicators; these always need to be data fields.
regards
@jay.. -
Performance issue with drop and re-create index
My database table has about 2 million records. The index on the table was not optimized, so we created a new index; let's call it index2. This table then had the original index (index1) and index2. We then inserted data into this table from the other box. It was running for a few weeks.
Suddenly we noticed that a query which used to take a few seconds now took more than a minute. The execution plan was using index2, which technically should be faster. We checked that the statistics were up to date, and they were. So we dropped the new index, re-ran the query, and it completed in 10 seconds, using the old index. This puzzled me, since the point of index2 was to do better. So we re-created index2 and generated statistics for it. We re-ran the query and it completed in 5 seconds.
Every time we timed the query, I shut down and restarted the box to clear all caches, so all the times given are cold, not cached. The execution plans for index2 taking 1 minute and 5 seconds are nearly the same, with very minor differences in cost and cardinality. Any ideas why index2 took 1 minute before, and after a drop and re-create takes only 5 seconds?
The reason I want to find the cause is to ensure this doesn't happen again, since it's impossible for me to re-create the index every time I see this issue. Any thoughts would be helpful.
Firstly, the indexes are different: index1 is only on the time column, whereas index2 is a composite index consisting of 3 columns.
Here are the details. The tests I did were last Friday, 3/31. Yesterday and today, when I executed the same query, I got increased times again: yesterday it took 9 seconds and today 17 seconds. The stats job kicked in on both days and is up to date. Nothing gets deleted from this table; rows are only added.
3/31
Original
Elapsed: 00:01:02.17
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6553 Card=9240 Bytes=203280)
1 0 SORT (UNIQUE) (Cost=6553 Card=9240 Bytes=203280)
2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=15982 Card=2306303 Bytes=50738666)
drop index EVENT_NA_TIME_ETYPE
Elapsed: 00:00:11.91
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7792 Card=9275 Bytes=204050)
1 0 SORT (UNIQUE) (Cost=7792 Card=9275 Bytes=204050)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EVENT' (Cost=2092 Card=2284254 Bytes=50253588)
3 2 INDEX (RANGE SCAN) OF 'EVENT_TIME_NDX' (NON-UNIQUE) (Cost=6740 Card=2284254)
create index EVENT_NA_TIME_ETYPE ON EVENT(NET_ADDRESS,TIME,EVENT_TYPE);
BEGIN
SYS.DBMS_STATS.GENERATE_STATS('USER','EVENT_NA_TIME_ETYPE',0);
end;
Elapsed: 00:00:05.14
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6345 Card=9275 Bytes=204050)
1 0 SORT (UNIQUE) (Cost=6345 Card=9275 Bytes=204050)
2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12878 Card=2284254 Bytes=50253588)
4/3
Elapsed: 00:00:09.70
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6596 Card=9316 Bytes=204952)
1 0 SORT (UNIQUE) (Cost=6596 Card=9316 Bytes=204952)
2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=11696 Card=2409400 Bytes=53006800)
Statistics
0 recursive calls
0 db block gets
11933 consistent gets
9676 physical reads
724 redo size
467 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3 rows processed
4/4
Elapsed: 00:00:17.99
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6681 Card=9421 Bytes=207262)
1 0 SORT (UNIQUE) (Cost=6681 Card=9421 Bytes=207262)
2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12110 Card=2433800 Bytes=53543600)
Statistics
0 recursive calls
0 db block gets
12279 consistent gets
9423 physical reads
2608 redo size
467 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> select index_name, clustering_factor, blevel, leaf_blocks, distinct_keys from user_indexes where index_name like 'EVENT%';
INDEX_NAME            CLUSTERING_FACTOR  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS
EVENT_NA_TIME_ETYPE             2393170       2        12108        2395545
EVENT_PK                          32640       2         5313        2286158
EVENT_TIME_NDX                    35673       2         7075        2394055