Pivot table with very large number of columns
Hello,
here is the situation:
One table contains the raw data; from it I feed a second table with the extracted information (3 fields). I have to turn that content into a pivot table:
Ro   Co   Va
A    A    1
A    B    1
A    C    2
B    A    11

Turned into:

     A    B    C ...
A    1    1    2
B    11   null null
To do this I run a query like:
select r, sum(decode(c,'A',va)) cola, sum(decode(c,'B',va)) colb, sum(decode(c,'C',va)) colc, ... sum(decode(c,'XYZ',va)) colxyz from table group by r
The statement is generated by a script (CFMX) and it works until I reach a query that tries to pivot 672 distinct values of c, which means 672 columns...
Oracle doesn't like that: ORA-01467: sort key too long
I like this approach as it gets the result fast.
I have tried a different solution at the CFMX level for that specific query (querying the table with a loop on co inside a loop on ro), but I got a timeout.
Is there any work around?
I am using Oracle 9i.
Thank you!
insert into extracted_data select c, r, v, p from full_data where <specific_clause>
The values for C come from a query: select distinct c from extracted_data
and it is the same for R.
R and C are varchar2(3999).
I suppose that I can split on the first letter of the C column, as in:
SELECT alpha_a.r, alpha_a.cola, . . ., alpha_a.colm,
       alpha_b.coln, . . ., alpha_z.colz
FROM (SELECT r, SUM(DECODE(c, 'A...', va)) cola, . . .
             SUM(DECODE(c, 'A...', va)) colm
      FROM table
      WHERE c LIKE 'A%'
      GROUP BY r) alpha_a,
     (SELECT r, SUM(DECODE(c, 'B...', va)) coln, . . .
      FROM table
      WHERE c LIKE 'B%'
      GROUP BY r) alpha_b,
     . . .
     (SELECT r, SUM(DECODE(c, 'Z...', va)) colz, . . .
      FROM table
      WHERE c LIKE 'Z%'
      GROUP BY r) alpha_z
WHERE alpha_a.r = alpha_b.r AND alpha_a.r = alpha_c.r . . . AND alpha_a.r = alpha_z.r
I will have 27 SELECT statements joined... I have to check whether, even like that, I don't hit the limit inside one of the SELECTs.
"in real life"
select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
(select r, sum(decode(C, 'Wall, unspecified',cases)) W0 from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
(select r,
sum(decode(C, 'Ceramic tiles, indoors',cases)) C0,
sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)',cases)) C1
from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
where GRPW.r = GRPC.r
order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1
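If the 27-way join itself ever hits a limit, another sketch (hypothetical; pivot_result is a made-up staging table name, the source is the same tmp_maqueje) is to materialize all the r values once, then bolt on each letter group with ALTER TABLE plus a correlated UPDATE, so no single statement carries more than one group's aggregates:

```sql
-- Hypothetical sketch: seed the staging table with every r (no WHERE,
-- so no row group is missed), then fill one letter group at a time.
create table pivot_result as
select r,
       sum(decode(c, 'Wall, unspecified', cases)) w0
from   tmp_maqueje
group by r;

-- Add the next group's columns, then fill them with one UPDATE:
alter table pivot_result add (c0 number, c1 number);

update pivot_result p
set (c0, c1) =
    (select sum(decode(t.c, 'Ceramic tiles, indoors', t.cases)),
            sum(decode(t.c, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)', t.cases))
     from   tmp_maqueje t
     where  t.r = p.r
       and  upper(t.c) like 'C%');
```

Repeating the ALTER/UPDATE pair per letter group keeps each statement small; the cost is one extra pass over tmp_maqueje per group.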
Message was edited by:
maquejp
Similar Messages
-
JDev: af:table with a large number of rows
Hi
We are developing with JDeveloper 11.1.2.1. We have a VO that returns more than 2,000,000 rows, which we display in an af:table with access mode 'scrollable' (the default) and 'in Batches of' 101. The user can select one row and do CRUD operations on the VO through popups. The application works fine, but I read that scrolling a very large number of rows is not a good idea because it can cause an OutOfMemory exception if the user uses the scroll bar many times. I have tried access mode 'Range Paging', but then the application behaves strangely. Sometimes when I select a row to edit, if the selected row is number 430, the popup shows number 512, and when I try to insert a new row it throws this exception:
oracle.jbo.InvalidOperException: JBO-25053: Cannot navigate with unposted rows in a RangePaging RowSet.
at oracle.jbo.server.QueryCollection.get(QueryCollection.java:2132)
at oracle.jbo.server.QueryCollection.fetchRangeAt(QueryCollection.java:5430)
at oracle.jbo.server.ViewRowSetIteratorImpl.scrollRange(ViewRowSetIteratorImpl.java:1329)
at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStartWithRefresh(ViewRowSetIteratorImpl.java:2730)
at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStart(ViewRowSetIteratorImpl.java:2715)
at oracle.jbo.server.ViewRowSetImpl.setRangeStart(ViewRowSetImpl.java:3015)
at oracle.jbo.server.ViewObjectImpl.setRangeStart(ViewObjectImpl.java:10678)
at oracle.adf.model.binding.DCIteratorBinding.setRangeStart(DCIteratorBinding.java:3552)
at oracle.adfinternal.view.faces.model.binding.RowDataManager._bringInToRange(RowDataManager.java:101)
at oracle.adfinternal.view.faces.model.binding.RowDataManager.setRowIndex(RowDataManager.java:55)
at oracle.adfinternal.view.faces.model.binding.FacesCtrlHierBinding$FacesModel.setRowIndex(FacesCtrlHierBinding.java:800)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
<LoopDiagnostic> <dump> [8261] variableIterator variables passivated >>> TrackQueryPerformed def
<LifecycleImpl> <_handleException> ADF_FACES-60098: The Faces lifecycle received unhandled exceptions in phase RENDER_RESPONSE 6
What is the best way to display this amount of data in a af:table and do CRUD operations?
Thanks
Edited by: 972255 on 05/12/2012 09:51
Hi,
honestly, the best way is to provide users with an option to filter the result set displayed in the table, reducing its size. No one will scroll through 2,000,000 rows using the table scrollbar.
So one hint for optimization would be a query form (e.g. af:query)
To answer your question "srollable" vs. "page range", see
http://docs.oracle.com/cd/E21043_01/web.1111/b31974/bcadvvo.htm#ADFFD1179
Pay attention to what is written there in the context of "The range paging access mode is typically used for paging through read-only row sets, and often is used with read-only view objects."
Frank -
Problem in compilation with very large number of method parameters
I have a java file which I created using WSDL2Java. Since the actual WSDL has a complex type with a large number of elements (around 600), the resulting java file has a method that takes 600 parameters of various types. When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully using JBuilder X. The only way I could compile successfully at the command prompt was by reducing the number of parameters to around 250, but unfortunately that is not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?
... a method that takes 600 parameters ...
Not compatible with the spec; see Method Descriptors.
When I try to compile it using javac at command prompt, it says "Too many parameters" and doesn't compile.
As it should.
The same is compiling successfully using JBuilder X.
If JBuilder produces a class file, that class file may very well be invalid.
The only way I could compile successfully at command prompt is by reducing the number of parameters to around 250
Which is what the spec says.
but unfortunately that it's not a workable solution.
Pass an array of objects - an array is just one object.
Does Sun specify any upper bound on number of parameters that can be passed to a method?
Yes. -
Creating a table with a variable number of columns
Hi,
I am working on a form and I want to allow the user to add tables to the form. If I give them a base table (for instance, a table with two rows and columns) is there any way to allow the user to add columns to the table. I can add new instances of rows, but I need the number of columns to be variable as well.
I am working in Livecycle Designer ES2.
Hi,
check this article.
http://forms.stefcameron.com/2006/10/28/scripting-table-columns/
Hope this helps. -
Reading a csv file with a large number of columns
Hello
I have been attempting to read data from large csv files with 38 columns by reading a line using readline and scanning the linebuffer using scan.
The file size can be up to 100 MB.
Scan does not seem to support the large number of fields.
Any suggestions on reading the 38 comma-separated fields? There is one header line in the file.
Thanks
Solved!
Go to Solution.
See if strtok() is useful: http://www.elook.org/programming/c/strtok.html
-
How to clone data with in Table with dynamic 'n' number of columns
Hi All,
I've a table with syntax,
create table Temp (id number primary key, name varchar2(10), partner varchar2(10), info varchar2(20));
And with data like
insert into temp values (sequence.nextval, 'test', 'p1', 'info for p1');
insert into temp values (sequence.nextval, 'test', 'p2', 'info for p2');
And now, I need to clone the data in the TEMP table that has name 'test' under the new name 'test1'. Here is my script:
insert into Temp select sequence.nextval id, 'test1' name, partner, info from TEMP where name='test';
This query executed successfully and inserted the records.
The PROBLEM is, if some new columns are added to the TEMP table, this query needs to be updated.
How can I clone the data within the table for 'n' number of columns, with
some columns getting dynamic data and the remaining columns copied from the source?
Thanks & Regards
PavanPinnu.
Edited by: pavankumargupta on Apr 30, 2009 10:37 AM
Hi,
Thanks for the quick reply.
My scenario: we have a Game Details table. Whenever a game gets cloned, we need to add new records to that table for the new game.
As id is the primary key, it should be populated from a sequence (that is what we use in our system), and Game Name will be the new game's name. Data for the other columns should be the same as the parent game's.
Whenever business needs change, new columns get added to the table.
And with the existing query,
insert into Temp (id, name, partner, info) select sequence.nextval id, 'test1' name, partner, info from TEMP where name='test'
will successfully add new rows, but the newly added columns will have empty data.
So, is there any way to do this? I mean, some columns with sequence values and the other columns with existing values.
One way we can do it is to get the ResultSet MetaData (I'm using Java), parse the columns, and prepare a query in the required format.
I'm looking for alternative ways in query format in SQL.
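If staying in SQL/PL*SQL is acceptable, one sketch (hypothetical; MY_SEQ is a placeholder for your sequence name) builds the column list from USER_TAB_COLUMNS at run time, so columns added later are copied automatically:

```sql
declare
  v_cols varchar2(4000);
  v_sql  varchar2(4000);
begin
  -- Every column except the two that need special values.
  for rec in (select column_name
                from user_tab_columns
               where table_name = 'TEMP'
                 and column_name not in ('ID', 'NAME')
               order by column_id)
  loop
    v_cols := v_cols || ', ' || rec.column_name;
  end loop;

  -- id from the sequence, name as the literal, the rest copied as-is.
  v_sql := 'insert into temp (id, name' || v_cols || ')'
        || ' select my_seq.nextval, ''test1''' || v_cols
        || ' from temp where name = ''test''';

  execute immediate v_sql;
end;
/
```

When a column is added to TEMP it shows up in USER_TAB_COLUMNS and gets copied on the next run without the script being touched.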
Thanks & Regards
PavanPinnu.
Edited by: pavankumargupta on Apr 30, 2009 11:05 AM -
Best Practice - Analyze table with very large partitions
We have a table that contains 100 partitions with about 20m rows in each. Right now the analyze is taking about 1 hour per partition. The table is used for reporting and will have a nightly load of the previous day's data.
What would be the best way to analyze this table? Besides using a low value for ESTIMATE and using low GRANULARITY.
Thank You.
Are you suggesting that the table is so big, it's not feasible to analyze anymore?
I'm suggesting it's not necessary. I think it's highly unlikely that a nightly load is going to change the stats in any meaningful way, unless you are loading millions of rows. The law of diminishing returns has kicked in.
Remember, the standard advice from Oracle is to gather statistics once and then only bother refreshing those stats when we need to. From Metalink note #44961.1:
"Given the 'best' plan is unlikely to change, frequent gathering statistics has no benefit. It does incur costs though."
What you might find useful is to export the stats from your last run before you do the new run (you should do this anyway). Then after the next stats refresh import both sets of stats into dummy schemas and compare them. If the difference is significant then you ought to keep analysing (especially if yours is a DSS or warehousing database). But if they are broadly the same then maybe it's time to stop.
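The export step described above can be done with DBMS_STATS (a sketch; MYSCHEMA, BIG_PART_TABLE and the stat-table names are placeholders for your own objects):

```sql
-- Create a holding table and save the current statistics into it.
exec dbms_stats.create_stat_table(ownname => 'MYSCHEMA', stattab => 'STATTAB_OLD');
exec dbms_stats.export_table_stats(ownname => 'MYSCHEMA', tabname => 'BIG_PART_TABLE', stattab => 'STATTAB_OLD');

-- ... nightly load and fresh gather happen here ...

-- Save the new statistics into a second holding table for comparison.
exec dbms_stats.create_stat_table(ownname => 'MYSCHEMA', stattab => 'STATTAB_NEW');
exec dbms_stats.export_table_stats(ownname => 'MYSCHEMA', tabname => 'BIG_PART_TABLE', stattab => 'STATTAB_NEW');

-- If the new stats turn out worse, the old set can be put back:
exec dbms_stats.import_table_stats(ownname => 'MYSCHEMA', tabname => 'BIG_PART_TABLE', stattab => 'STATTAB_OLD');
```

Keeping the old set around also gives you the rollback APC mentions if a new plan regresses.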
Cheers, APC -
TableView performance with large number of columns
I notice that it takes a while for table views to populate when they have a large number of columns (more than 100 or so, subjectively).
Running VisualVM based on CPU Samples, I see that the largest amount of time is spent here:
javafx.scene.control.TableView.getVisibleLeafIndex() 35.3% 8,113 ms
next is:
javfx.scene.Parent$1.onProposedChange() 9.5% 2,193 ms
followed by
javafx.scene.control.Control.loadSkinClass() 5.2% 1,193 ms
I am using JavaFx 2.1 co-bundled with Java7u4. Is this to be expected, or are there some performance tuning hints I should know?
Thanks,
- Pat
We're actually doing some TableView performance work right now. I wonder if you could file an issue with a simple reproducible test case? I haven't seen the same data you have here in our profiles (nearly all time is spent on reapplying CSS), so I would be interested in your exact test, to be able to profile it and see what is going on.
Thanks
Richard -
How to show data from a table having large number of columns
Hi ,
I have a report with a single row having a large number of columns; I have to use a scroll bar to see all the columns.
Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side)?
Column1   Data    Column11  Data
Column2   Data    Column12  Data
Column3   Data    Column13  Data
Column4   Data    Column14  Data
Column5   Data    Column15  Data
Column6   Data    Column16  Data
Column7   Data    Column17  Data
Column8   Data    Column18  Data
Column9   Data    Column19  Data
Column10  Data    Column20  Data
I am using Apex 4.2.3 version on Oracle 11g XE.
user2602680 wrote:
Please update your forum profile with a real handle instead of "user2602680".
Yes, this can be achieved using a custom named column report template. -
Pivot table with variables columns
I need help pivoting a table with a variable number of columns.
I have a pivot table :
SELECT a.*
FROM (SELECT codigo_aluno,nome_aluno , id_curso,dia FROM c_frequencia where dia like '201308%') PIVOT (sum(null) FOR dia IN ('20130805' ,'20130812','20130819','20130826')) a
but I need to run the select with the values for dia coming from another table:
SELECT a.*
FROM (SELECT codigo_aluno,nome_aluno , id_curso,dia FROM c_frequencia where dia like '201308%') PIVOT (sum(null) FOR dia IN (
select dia from v_dia_mes )) a
thank you
The correct answer should be "Use the Pivoted Report Region Plugin".
But, as far as I know, nobody has created/posted that type of APEX plugin.
You may have to use a Basic Report (not an IR) so that you can use "Function returning SELECT" for your Source.
You would need two functions:
One that dynamically generates the Column Names
One that dynamically generates the SELECT statement
These should be in a PL/SQL package so that the latter can call the former, to ensure that the column data matches the column names.
i.e. -- no 'SELECT *'
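One more option worth knowing: the PIVOT clause accepts a subquery in its IN list only in the XML variant, so if an XMLTYPE result is acceptable, no dynamic SQL is needed at all. A sketch, assuming the measure column is called valor (a hypothetical name; the post above has sum(null)):

```sql
-- PIVOT XML allows a subquery (or the keyword ANY) in the IN list;
-- the pivoted values come back in a single XMLTYPE column (DIA_XML).
select *
from  (select codigo_aluno, nome_aluno, id_curso, dia, valor
         from c_frequencia
        where dia like '201308%')
pivot xml (sum(valor) for dia in (select dia from v_dia_mes));
```

The report layer then has to unpack the XML, which is why the "Function returning SELECT" approach above is usually the more practical route in APEX.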
MK -
Passing a large number of column values to an Oracle insert procedure
I am quite new to the ODP space, so can someone please tell me what's the best and most efficient way to pass a large number of column values to an Oracle procedure from my C# program ? Passing a small number of values as parameters seem OK but when there are many, this seems inelegant.
Passing a small number of values as parameters seems OK but when there are many, this seems inelegant.
Is it possible that your table with a staggering number of columns, or the method that collapses without so many inputs, is ultimately what is inelegant?
I once did a database conversion from VAX RMS system with a "table" with 11,000 columns to a normalized schema in an Oracle database. That was inelegant.
Michael O
http://blog.crisatunity.com -
How to generate report with dynamic variable number of columns?
I need to generate a report with varying column names (state names) as follows:
SELECT AK, AL, AR,... FROM States ;
I get these column names from the result of another query.
In order to clarify my question, please consider the following tables:
CREATE TABLE TIME_PERIODS (
PERIOD VARCHAR2 (50) PRIMARY KEY
);
CREATE TABLE STATE_INCOME (
NAME VARCHAR2 (2),
PERIOD VARCHAR2 (50) REFERENCES TIME_PERIODS (PERIOD),
INCOME NUMBER (12, 2)
);
I'd like to generate a report as follows:
AK CA DE FL ...
PERIOD1 1222.23 2423.20 232.33 345.21
PERIOD2
PERIOD3
Total 433242.23 56744.34 8872.21 2324.23 ...
The TIME_PERIODS.PERIOD values and the state names could change dynamically,
so I can't specify the state names in the select query, like
SELECT AK, AL, AR, ... FROM ...
What is the best way to generate this report?
SQL> -- test tables and test data:
SQL> CREATE TABLE states
2 (state VARCHAR2 (2))
3 /
Table created.
SQL> INSERT INTO states
2 VALUES ('AK')
3 /
1 row created.
SQL> INSERT INTO states
2 VALUES ('AL')
3 /
1 row created.
SQL> INSERT INTO states
2 VALUES ('AR')
3 /
1 row created.
SQL> INSERT INTO states
2 VALUES ('CA')
3 /
1 row created.
SQL> INSERT INTO states
2 VALUES ('DE')
3 /
1 row created.
SQL> INSERT INTO states
2 VALUES ('FL')
3 /
1 row created.
SQL> CREATE TABLE TIME_PERIODS
2 (PERIOD VARCHAR2 (50) PRIMARY KEY)
3 /
Table created.
SQL> INSERT INTO time_periods
2 VALUES ('PERIOD1')
3 /
1 row created.
SQL> INSERT INTO time_periods
2 VALUES ('PERIOD2')
3 /
1 row created.
SQL> INSERT INTO time_periods
2 VALUES ('PERIOD3')
3 /
1 row created.
SQL> INSERT INTO time_periods
2 VALUES ('PERIOD4')
3 /
1 row created.
SQL> CREATE TABLE STATE_INCOME
2 (NAME VARCHAR2 (2),
3 PERIOD VARCHAR2 (50) REFERENCES TIME_PERIODS (PERIOD),
4 INCOME NUMBER (12, 2))
5 /
Table created.
SQL> INSERT INTO state_income
2 VALUES ('AK', 'PERIOD1', 1222.23)
3 /
1 row created.
SQL> INSERT INTO state_income
2 VALUES ('CA', 'PERIOD1', 2423.20)
3 /
1 row created.
SQL> INSERT INTO state_income
2 VALUES ('DE', 'PERIOD1', 232.33)
3 /
1 row created.
SQL> INSERT INTO state_income
2 VALUES ('FL', 'PERIOD1', 345.21)
3 /
1 row created.
SQL> -- the basic query:
SQL> SELECT SUBSTR (time_periods.period, 1, 10) period,
2 SUM (DECODE (name, 'AK', income)) "AK",
3 SUM (DECODE (name, 'CA', income)) "CA",
4 SUM (DECODE (name, 'DE', income)) "DE",
5 SUM (DECODE (name, 'FL', income)) "FL"
6 FROM state_income, time_periods
7 WHERE time_periods.period = state_income.period (+)
8 AND time_periods.period IN ('PERIOD1','PERIOD2','PERIOD3')
9 GROUP BY ROLLUP (time_periods.period)
10 /
PERIOD AK CA DE FL
PERIOD1 1222.23 2423.2 232.33 345.21
PERIOD2
PERIOD3
1222.23 2423.2 232.33 345.21
SQL> -- package that dynamically executes the query
SQL> -- given variable numbers and values
SQL> -- of states and periods:
SQL> CREATE OR REPLACE PACKAGE package_name
2 AS
3 TYPE cursor_type IS REF CURSOR;
4 PROCEDURE procedure_name
5 (p_periods IN VARCHAR2,
6 p_states IN VARCHAR2,
7 cursor_name IN OUT cursor_type);
8 END package_name;
9 /
Package created.
SQL> CREATE OR REPLACE PACKAGE BODY package_name
2 AS
3 PROCEDURE procedure_name
4 (p_periods IN VARCHAR2,
5 p_states IN VARCHAR2,
6 cursor_name IN OUT cursor_type)
7 IS
8 v_periods VARCHAR2 (1000);
9 v_sql VARCHAR2 (4000);
10 v_states VARCHAR2 (1000) := p_states;
11 BEGIN
12 v_periods := REPLACE (p_periods, ',', ''',''');
13 v_sql := 'SELECT SUBSTR(time_periods.period,1,10) period';
14 WHILE LENGTH (v_states) > 1
15 LOOP
16 v_sql := v_sql
17 || ',SUM(DECODE(name,'''
18 || SUBSTR (v_states,1,2) || ''',income)) "' || SUBSTR (v_states,1,2)
19 || '"';
20 v_states := LTRIM (SUBSTR (v_states, 3), ',');
21 END LOOP;
22 v_sql := v_sql
23 || ' FROM state_income, time_periods
24 WHERE time_periods.period = state_income.period (+)
25 AND time_periods.period IN (''' || v_periods || ''')
26 GROUP BY ROLLUP (time_periods.period)';
27 OPEN cursor_name FOR v_sql;
28 END procedure_name;
29 END package_name;
30 /
Package body created.
SQL> -- sample executions from SQL:
SQL> VARIABLE g_ref REFCURSOR
SQL> EXEC package_name.procedure_name ('PERIOD1,PERIOD2,PERIOD3','AK,CA,DE,FL', :g_ref)
PL/SQL procedure successfully completed.
SQL> PRINT g_ref
PERIOD AK CA DE FL
PERIOD1 1222.23 2423.2 232.33 345.21
PERIOD2
PERIOD3
1222.23 2423.2 232.33 345.21
SQL> EXEC package_name.procedure_name ('PERIOD1,PERIOD2','AK,AL,AR', :g_ref)
PL/SQL procedure successfully completed.
SQL> PRINT g_ref
PERIOD AK AL AR
PERIOD1 1222.23
PERIOD2
1222.23
SQL> -- sample execution from PL/SQL block
SQL> -- using parameters derived from processing
SQL> -- cursors containing results of other queries:
SQL> DECLARE
2 CURSOR c_period
3 IS
4 SELECT period
5 FROM time_periods;
6 v_periods VARCHAR2 (1000);
7 v_delimiter VARCHAR2 (1) := NULL;
8 CURSOR c_states
9 IS
10 SELECT state
11 FROM states;
12 v_states VARCHAR2 (1000);
13 BEGIN
14 FOR r_period IN c_period
15 LOOP
16 v_periods := v_periods || v_delimiter || r_period.period;
17 v_delimiter := ',';
18 END LOOP;
19 v_delimiter := NULL;
20 FOR r_states IN c_states
21 LOOP
22 v_states := v_states || v_delimiter || r_states.state;
23 v_delimiter := ',';
24 END LOOP;
25 package_name.procedure_name (v_periods, v_states, :g_ref);
26 END;
27 /
PL/SQL procedure successfully completed.
SQL> PRINT g_ref
PERIOD AK AL AR CA DE FL
PERIOD1 1222.23 2423.2 232.33 345.21
PERIOD2
PERIOD3
PERIOD4
1222.23 2423.2 232.33 345.21 -
JRockit for applications with very large heaps
I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects are stored in the old generation and a smaller number in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using hugePages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50GB of RAM for both the max and min heap size. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently using the default, which optimizes for throughput).
I used the JRMC to profile the operation and here were the results that I thought were interesting:
liveset 30%
heap fragmentation 2.5%
GC Pause time average 600ms
GC Pause time max 2.5 sec
It had to do 4 young generation collects, which were very fast, and then 2 old generation collects, each about 2.5s (the entire operation takes 45s).
For the long old generation collects, about 50% of the time was spent in mark and 50% in sweep. Going down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
Heap usage: Although 50GB is committed it is fluctuating between 32GB and 20GB of heap usage. To give you an idea of what is stored in the heap about 50% of the heap is char[] and another 20% are int[] and long[].
My question is are there any other flags that I could try that might help improve performance or is there anything I should be looking at closer in JRMC to help tune this application. Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance but we noticed that larger heaps did not always improve performance.
Thanks in advance for any help you can provide.
Any suggestions for using JRockit with very large heaps?
-
Need to hold very large number - please help
Hello all,
I am working with programming the SSH handshake and need to represent a very large number in order to get it to work correctly. The number is:
179769313486231590770839156793787453197860296048756011706444423684197180216158519368947833795864925541502180565485980503646440548199239100050792877003355816639229553136239076508735759914822574862575007425302077447712589550957937778424442426617334727629299387668709205606050270810842907692932019128194467627007
which is about 1.8 * 10^308,
and we know the largest double is:
1.7976931348623157E308
Can someone please help me with representing this number? It is a very large prime number that is used in the key exchange for SSH.
Thank you all for you time
Max
And who's the slowest old sod again?
This is amazing: I read a new topic, no replies yet,
I check it again, nothing yet,
I craft my reply and presto: some quick fingers beat me to it again ...
Grolsch makes slow :P ;)
It's still only 11:45 ... and I'm still waiting ;-)
Jos
Drill Down in Pivot Table with item on page level
Hi,
I've got a pivot table with Chart next to it. The chart is a separate view, I did not use the 'Chart pivoted results' option.
My pivot table has one Page Level item being the Month Name.
I now have the strange behavior that drilling down on the pivot table elements gives me No Results, as if my filter were too restrictive. The problem is with the 'Month Name': removing this item from the filter shows the correct data. When I use the chart next to my data to do the drilling, I encounter no problems, because that way the restriction on the month name is not added when drilling.
Has anybody encountered this behavior before and knows of a way to solve it?
Thanks for your advice,
Kris
My query looks like this:
SELECT TO_NUMBER (TO_CHAR (t1601.datum_prestatie, 'MM'), '99') AS c3,
t1584.description AS c4, t1497.project_group AS c5,
t1497.project_desc AS c6, t1497.project_code AS c7,
CONCAT (CONCAT (t2241.firstname, ' '), t2241.lastname) AS c8,
t2254.employee_number AS c9, t1601.duur AS c11,
t1601.datum_prestatie AS c12
FROM t_projects t1497 /* Dim_PROJECTS */,
t_organization t1579 /* DimX_ORGANIZATION_Project */,
t_organization t1584 /* DimX_ORGANIZATION_Project_Parent */,
t_employees t2241 /* DimA_EMPLOYEE_Prestatie */,
t_prestaties t2254 /* DimA_PRESTATIE_Project */,
t_prestaties t1601 /* Fact_PRESTATIES */
WHERE ( t1497.business_domain_code = t1579.department_code
AND t1497.project_code = t1601.project_code
AND t1497.project_code = t2254.project_code
AND t1579.parent_department = t1584.department_code
AND t2241.employee_number = t2254.employee_number
AND t1584.description = 'Specialized Technologies'
AND t1497.project_group = 'SUPP AP TM'
AND TO_NUMBER (TO_CHAR (t1601.datum_prestatie, 'MM'), '99') = 3)
The last row being a check on the month number.
At the same time in OBIEE I get this :
No Results
The specified criteria didn't result in any data. This is often caused by applying filters that are too restrictive or that contain incorrect values. Please check your Request Filters and try again. The filters currently being applied are shown below.
Divisie is equal to Specialized Technologies
and Month is equal to 3
and Project Group is equal to SUPP AP TM
and Month Name is equal to Mar
The physical query doesn't even show the restriction on Month Name which makes it even more spooky...