Duplicating rows in SSIS
Hi
I am new to SSIS and am trying to duplicate rows in a table according to a given quantity. Can this be done in a Data Flow Task, instead of the control flow (For Loop container)? While modifying an existing package, I have realised that the best place
would be a Data Flow Task. However, rather than writing my own script code, is there an easier way of doing this?
Example:
Contract#  Qty
==============
A          2
B          3

Result:

Contract#  Qty
==============
A          1
A          1
B          1
B          1
B          1
Any assistance would be greatly appreciated.
Thanks
Yes, it can be done in an SSIS Data Flow Task.
You can connect your data source to an OLE DB Command, and inside the OLE DB Command you can execute a stored procedure.
That means the stored procedure is executed once for each incoming row, i.e. for each combination of Contract# and Qty.
Inside the stored procedure you can write T-SQL (for example a WHILE loop) that inserts as many records into the destination table as Qty specifies.
So when the OLE DB Command executes the stored procedure for a row (for example Contract#: A, Qty: 2), the procedure creates that many records in the destination table (in this example, two records with Contract#: A and Qty: 1).
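A minimal sketch of such a stored procedure (the table and column names here are assumptions, not from the original package):

```sql
-- Hypothetical destination table: dbo.ContractLines (ContractNo, Qty)
CREATE PROCEDURE dbo.usp_ExpandContract
    @ContractNo VARCHAR(20),
    @Qty        INT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @i INT;
    SET @i = 1;
    -- Insert one row per unit of quantity, each with Qty = 1
    WHILE @i <= @Qty
    BEGIN
        INSERT INTO dbo.ContractLines (ContractNo, Qty)
        VALUES (@ContractNo, 1);
        SET @i = @i + 1;
    END
END
```

In the OLE DB Command you would map the input columns to the parameters, e.g. `EXEC dbo.usp_ExpandContract ?, ?`. A set-based alternative, if you can stage the source first, is to join against a numbers (tally) table: `SELECT s.ContractNo, 1 AS Qty FROM Staging s JOIN Numbers n ON n.n <= s.Qty` — this avoids row-by-row execution.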
Regards,
Reza
SQL Server MVP
Blog: http://rad.pasfu.com
SQL Server Integration Services 2012 Tutorial Videos: http://www.radacad.com/CoursePlan.aspx?course=1
Similar Messages
-
Custom Sort of Duplicated Row Values in Webi based on SAP BW Universe
This is a duplicate thread; I have already posted it in the Integration Kits - SAP section.
It concerns custom sorting of row values which are duplicated in a Webi report based on a SAP BW Universe.
I have created a Webi report based on a characteristic structure. Using an alerter, I indent each child node to mark it as a child below its parent node.
Please see a sample table below:
AAAA
onpeak
midpeak
offpeak
BBBB
onpeak
midpeak
offpeak.
If I enable "Avoid duplicate row aggregation", I am able to see the duplicate rows for "onpeak" etc. Since Webi applies a default sort, I have to use a custom sort to replicate the structure. But in the custom sort dialog I don't see the duplicated rows; I see only one "onpeak". Accordingly, if I try to sort using custom sort, the result will be
AAAA
onpeak
onpeak
midpeak
midpeak
offpeak
offpeak
BBBB.
Please let me know if anyone has any solution for this.
I have already created the report by modifying the row names in the backend, adding a letter before the value (for example "1offpeak") for the duplicated row and then removing the "1" in Webi using a substring. But I know this is a crude way of doing it. I have also changed the naming convention in the BI structure by introducing a white space before the name to avoid duplication and used that in the report. Not a good solution either.
-
How to Remove Duplicated Rows in a Report?
My report is displaying several rows of the same data in each field. How do I delete the duplicated rows?
For example:
Name Salary Date of Hire
Emp1 10 Today
Emp1 10 Today
Emp1 10 Today
Emp2 20 Yesterday
Emp2 20 Yesterday
(and so on)
Okay. First, don't let it frustrate you too much. If you like learning new things, Crystal will keep you happy for the rest of your life.
Now, if you link table A to table B & table A to table C
and there are multiple matching records in tables B & C for each record in table A (called a one-to-many relationship )
then you get a Cartesian effect yielding duplicated data
---which is to say that each record in table B will be repeated as many times as there are matching records in table C, and vice versa.
You can have as many one-to-one, or one-to-none (with an outer join) relationship links as you like in a report,
but you can only have one one-to-many link in a report without getting duplicated data.
You may have to start with test reports with only a couple of tables until you get a feel for the data.
There are different ways to get around this issue. If you are not summing any numbers, you might be able to use grouping and formatting to hide the duplicated data, or you may try subreports (small reports placed in a container report).
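The Cartesian effect described above is easy to reproduce directly in SQL; this hypothetical example (table and column names invented for illustration) shows how a second one-to-many link multiplies rows:

```sql
-- Hypothetical one-to-many joins: A -> B and A -> C.
-- If one row in A has 2 matching rows in B and 2 in C,
-- the result contains 2 x 2 = 4 rows for that A row:
-- every B match is paired with every C match.
SELECT a.id, b.val AS b_val, c.val AS c_val
FROM   A a
JOIN   B b ON b.a_id = a.id   -- first one-to-many link
JOIN   C c ON c.a_id = a.id;  -- second one-to-many link: rows multiply
```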
Edited by: DebiHerbert on Jul 16, 2010 9:03 PM -
How to avoid duplicated rows using the outer join
Hi everybody,
I have the following query:
select a.usr_login, b.ugp_rolename, b.ugp_display_name from
(select usr.usr_login, usr.usr_key, usg.ugp_key from usr,usg
where usg.usr_key = usr.usr_key
and usr.usr_login IN ('C01015','C01208')) a,
(select ugp.ugp_key, ugp.ugp_display_name from ugp
where ugp.ugp_rolename LIKE 'B-%') b
where a.ugp_key = b.ugp_key (+)
The first query 'a' has the following result:
usr_login  usr_key  ugp_key
C01015     49       565
C01015     49       683
C01015     49       685
C01015     49       3
C01208     257      3
The usr_login on table usr is the primary key and, as you can see above, for each usr_login I can find one or more ugp_key values in the table usg.
The query 'b' gives the list of all the usr_login's roles which have the name LIKE 'B-%' (it means '*Business Roles*'), and all the respective role's key (ugp_key)
So, when I join the query 'a' with the query 'b', I expect to find for every usr_login the respective ugp_display_name (the Business Role name).
Because the query 'b' contains ONLY the ugp_keys of the Business Roles, when I execute the complete query, this is the result:
usr_login  ugp_rolename  ugp_display_name
C01015     BK005         TELLER 1
C01015     BK003         TELLER 2
C01015     null          null
C01015     null          null
C01208     null          null
As you can see, with the outer join I obtain the Business Name (ugp_display_name) for each occurrence, and I have 2 duplicated rows with 'null' for the usr_login C01015. This is because the query 'b' doesn't contain the two ugp_keys 685 and 3.
Instead I'd like to have the following result:
usr_login  ugp_rolename  ugp_display_name
C01015     BK005         TELLER 1
C01015     BK003         TELLER 2
C01208     null          null
deleting ONLY the duplicated rows with null, when the usr_login has already at least one ugp_display_name not null.
For example:
1) The usr_login 'C01015' has 2 Business Roles (with ugp_key = 565 and 683) and other 2 not-Business Roles (with ugp_key = 685 and 3) --> I want to see ONLY the 2 records related to the Business Roles
2) The usr_login 'C01208' has only one not-Business Roles (with ugp_key = 3) --> I want to see the record related to the not- Business Role
Practically:
1) When a usr_login has one or more Business Roles and other not-Business Roles , I'd like to see ONLY the records about the Business Roles (not the records with 'null','null')
2) When a usr_login doesn't have Business Roles, I'd like to see the records about the same usr_login with 'null','null'
This, because I need to show both usr_logins: with and without Business Roles.
Anybody has any suggestions ? Any help will be appreciated.
Thanks in advance for any help !!
Alex

Hi, Alex,
So you want to display rows from a where either
(1) the row has a match in b, or
(2) no row with the same usr_login has a match.
Here's one way to do that:
WITH a AS
(
    SELECT usr.usr_login, usr.usr_key, usg.ugp_key
    FROM usr
    ,    usg
    WHERE usg.usr_key = usr.usr_key
    AND usr.usr_login IN ('C01015', 'C01208')
)
, b AS
(
    SELECT ugp.ugp_key, ugp.ugp_display_name
    FROM ugp
    WHERE ugp.ugp_rolename LIKE 'B-%'
)
, got_match_cnt AS
(
    SELECT a.usr_login, b.ugp_rolename, b.ugp_display_name
    ,      b.ugp_key
    ,      COUNT (b.ugp_key) OVER (PARTITION BY a.usr_login) AS match_cnt
    FROM a
    ,    b
    WHERE a.ugp_key = b.ugp_key (+)
)
SELECT usr_login, ugp_rolename, ugp_display_name
FROM got_match_cnt
WHERE ugp_key IS NOT NULL -- Condition (1)
OR    match_cnt = 0       -- Condition (2)
;
If b.ugp_rolename or b.ugp_display_name cannot be NULL, then you could use either of those just as well as b.ugp_key for testing condition (1).
By the way, you don't need sub-queries for a and b; you can do all the joins and all the filtering (except conditions (1) and (2)) in one query, but the sub-queries aren't hurting anything. If you find the separate sub-queries easier to understand, debug and maintain, then, by all means, keep them.
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables, and also post the results you want from that data.
Explain, using specific examples, how you get those results from that data.
Always say which version of Oracle you're using. -
Duplicating Row Information Macro
Hello,
I am having some trouble making this button or macro for this pdf file.
I currently have a row of input cells that I would like to duplicate with the click of a button.
The reason that I do not duplicate these rows to begin with is to minimize the space of the file and allow users to create their own request form with the number of parts they are requesting.
I am currently running Adobe LiveCycle Designer ES 8.2.1
Thanks
Re: Duplicating Row Information Macro
I was able to figure it out; your instructions were very clear, and it helped to have the example you sent as something tangible to learn from.
Thank you so much for your help. -
How to create a .ctl (control file) to validate Excel rows in SSIS?
In my package I have a requirement to use a .ctl control file to validate Excel rows. Can anyone tell me how to create a .ctl file which will have all of this information?
A few assumptions 1st:
1) I understood the .ctl file can be any ASCII (flat) file of arbitrary format; and that
2) You will drive the validation rules.
Since you seem to want to apply the validation as the first step in your package, I advocate plugging in the Script Transformation, which exposes the "ProcessInputRow" method that in turn allows a developer to intercept each row for inspection.
This is where you will need to think carefully about how to apply the validation rules to the incoming data.
You drive the code (logic). More thorough help is a click away here: http://www.codeproject.com/Articles/193855/An-indespensible-SSIS-transformation-component-Scr which covers how to make the row-by-row processing possible.
If you expected SSIS to provide this functionality for free, unfortunately that is not the case.
One of many reasons is that what you want to do is extremely laborious.
Arthur My Blog -
Lookup transformation to avoid duplicate rows? - SSIS 2005
Hi,
I'm maintaining an SSIS 2005 package. I need to read a flat file and write to a SQL Server table, avoiding duplicates.
The flat file to import can contain duplicate rows, and I also need to prevent the insert of any rows already existing in the SQL Server table.
So I think to use a Lookup transformation. I've created a flat file source and connected it to a Lookup transformation, and inside it I've specified the SQL Server destination table as the reference table. Then I've checked the available lookup columns, adding each as a new column; but the Lookup task raised an error, so I've specified replacement as the lookup operation. For each unmatched row I need to write to the SQL Server table (the reference table in the lookup). For the lookup error output I've indicated to ignore failure. Any other steps?
However, when I run the package, the SQL Server destination table contains only NULL values, but I want to see the rows not already present in the table.
Any suggestions, please? Thanks

Hi,
I'm using SSIS 2005 as reported in the title of the post.
I could have duplicates inside the source file and the existing table could haven't any rows.
Thanks
If you don't have any rows in the existing table, then all rows will go through the error output of the Lookup task. For duplicates, the Lookup task will find matches and they will go through the lookup match output.
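If the lookup keeps misbehaving, the same "insert only rows not already there" rule can be expressed in plain T-SQL once the file has been loaded into a staging table. A sketch, assuming hypothetical staging/destination tables and a BusinessKey column (none of these names are from the original package):

```sql
-- Deduplicate the staged file rows, then insert only keys
-- that do not already exist in the destination table.
INSERT INTO dbo.Destination (BusinessKey, Col1, Col2)
SELECT s.BusinessKey, s.Col1, s.Col2
FROM (
    SELECT BusinessKey, Col1, Col2,
           ROW_NUMBER() OVER (PARTITION BY BusinessKey
                              ORDER BY BusinessKey) AS rn
    FROM dbo.Staging
) s
WHERE s.rn = 1                          -- drop duplicates within the file
  AND NOT EXISTS (SELECT 1
                  FROM dbo.Destination d
                  WHERE d.BusinessKey = s.BusinessKey);  -- skip rows already loaded
```

Both conditions from the question are covered: duplicates inside the flat file (ROW_NUMBER) and rows already present in the table (NOT EXISTS).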
Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
Transposing columns into rows using SSIS
Hi everyone,
Just need some help.
I wanna transpose a data table in SSIS.
The table is originally (one row per field, with its category):

CUSTREFNO      Citizen
FORENAME       Citizen
SURNAME        Citizen
DATE-OF-BIRTH  Citizen
TEL_NO         Citizen
GUARDIANNAME   Citizen
ADDRESS1       Citizen
ADDRESS2       Citizen
ADDRESS3       Citizen
ADDRESS4       Citizen
POSTTOWN       Citizen
POSTCODE       Citizen
OPERATOR       Junk
BOARDINGPOINT  Junk
DESTINATION    Junk
SERVICE_NO     Junk
EMAIL          Citizen
And I need the table in the following format (field names pivoted into columns, with the category values as a single row):

CUSTREFNO FORENAME SURNAME DATE-OF-BIRTH TEL_NO GUARDIANNAME ADDRESS1 ADDRESS2 ADDRESS3 ADDRESS4 POSTTOWN POSTCODE OPERATOR BOARDINGPOINT DESTINATION SERVICE_NO EMAIL
Citizen   Citizen  Citizen Citizen       Citizen Citizen     Citizen  Citizen  Citizen  Citizen  Citizen  Citizen  Junk     Junk          Junk        Junk       Citizen
Any help would be very much appreciated, thanks

Umar, please see these resources.
http://blogs.msdn.com/b/philoj/archive/2007/11/10/transposing-rows-and-columns-in-sql-server-integration-services.aspx
http://www.rad.pasfu.com/index.php?/archives/14-PIVOT-Transformation-SSIS-Complete-Tutorial.html
http://dinesql.blogspot.in/2011/08/pivot-and-unpivot-integration-services.html
http://sqlage.blogspot.in/2013/12/ssis-how-to-use-unpivot-in-ssis.html
http://www.bimonkey.com/2009/06/the-pivot-transformation/
http://www.sql-server-performance.com/2007/ssis-pivot-unpivot/
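For reference, the same reshaping can be done in T-SQL with PIVOT once the data sits in a two-column (field name, category) table. The field names below come from the sample; the table name and the shortened column list are assumptions for illustration:

```sql
-- Hypothetical source table dbo.FieldCategories (FieldName, Category),
-- one row per field.  PIVOT turns each field name into a column,
-- with its category as the single row of values.
SELECT [CUSTREFNO], [FORENAME], [SURNAME], [DATE-OF-BIRTH],
       [OPERATOR], [EMAIL]
FROM (
    SELECT FieldName, Category
    FROM dbo.FieldCategories
) src
PIVOT (
    MAX(Category)                    -- one category value per field name
    FOR FieldName IN ([CUSTREFNO], [FORENAME], [SURNAME],
                      [DATE-OF-BIRTH], [OPERATOR], [EMAIL])
) p;
```

The SSIS Pivot transformation in the linked articles does the equivalent inside the data flow; the hyphenated [DATE-OF-BIRTH] name is why the bracket quoting is needed.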
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it. -
Does pivotTable support rows with duplicated row names?
I am developing an application using JDeveloper 11.1.1.6.0.
A pivot table can display attribute values as row or column names. If the "row name" attribute has the same/duplicated value for two records, does the pivot table show the two records in two rows with the same row name?
My experiment shows only one row, with a blank value for "measure", in such a case. I just want to confirm this is the expected behavior.

Hi,
don't quite understand the use case so answering in general: no data should be lost when using Pivot table
Frank -
Database trigger to insert duplicated rows on audit table
Hi
Is it possible to insert duplicate rows (at the moment the database generates a PK violation constraint for one specific table) into an audit table?
Code like the following is not working; the whole transaction always rolls back and the audit table remains empty:
CREATE OR REPLACE TRIGGER USER.audit_TABLE_TRG
before INSERT ON USER.TABLE
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
declare
V_conteo number(1) := 0;
duplicate_record EXCEPTION;
begin
select count(*)
into V_conteo
from USER.TABLE
where <PK conditions>;
if V_conteo > 0 then
begin
INSERT INTO USER.AUDIT_TABLE
(<>)
VALUES
(<>);
raise duplicate_record;
exception
when duplicate_record then
INSERT INTO USER.AUDIT_TABLE
(<>)
VALUES
(<>);
raise_application_error(-20019,'Duplicated column1/column2:'||:NEW.column1||'/'||:NEW.column2);
when others then
dbms_output.put_line('Error ...'||sqlerrm);
end;
end if;
end;
END;
/
Exactly, this is my problem: one single transaction (insert into the audit table and attempt to insert into the target table). The reason for this post is to know whether there is another way to log every attempted duplicate insert on the target table into an audit table. You're right, maybe I can add a date column and modify the PK on the audit table, but my main problem still happens.
Can I ask why you want to go the trigger route if your intention is only to capture the duplicate records? Assuming you are on at least 10gR2, you can look at the DML error table. If you go this route, there is no need for the additional overhead of triggers, code failures, etc.
Oracle can automatically store the failed records in an error table which you could later on investigate and fix it or ignore it.
Simple example:
SQL> create table emp (empno number primary key, ename varchar2(10), sal number);
Table created.
SQL> insert into emp values(1010, 'Venkat', 100);
1 row created.
SQL> commit;
Commit complete.
Create error table to log the failed records
BEGIN
DBMS_ERRLOG.create_error_log (dml_table_name => 'emp');
END;
Now let's insert a duplicate record
SQL> insert into emp values(1010, 'John', 200) ;
insert into emp values(1010, 'John', 200)
ERROR at line 1:
ORA-00001: unique constraint (VENKATB.SYS_C002813299) violated
Now use the log table to capture
SQL> insert into emp values(1010, 'John', 200) LOG ERRORS INTO err$_emp ('INSERT') REJECT LIMIT UNLIMITED;
0 rows created.
Now check the error log table and do whatever you want
SQL> select ORA_ERR_MESG$, empno, ename, sal from err$_emp;

ORA_ERR_MESG$                                                  EMPNO ENAME SAL
ORA-00001: unique constraint (VENKATB.SYS_C002813467) violated 1010  John  200

1 row selected.

This will also capture errors when you do
INSERT INTO EMP SELECT * FROM EMP_STAGE LOG ERRORS INTO err$_emp ('INSERT') REJECT LIMIT UNLIMITED;
You can capture the whole record into the log table. All columns.
Regards
Edited : Code -
Date tool calculation dimension is causing duplicated rows in drill through
I am implementing a date tool as per:
http://www.sqlbi.com/articles/datetool-dimension-an-alternative-time-intelligence-implementation
I have my date tool dimension shell with 10 rows, a row for each calculation. I've created the scope statements in the cube. The date tool is working with the calculations I've created.
I've found a problem with drill through in Excel 2013, where it returns duplicate rows. It looks like a cross join of the rows that should be returned with the number of rows (calculations) in my date tool dimension. If I remove the date tool dimension from
the cube I get the correct number of rows, so I know it's definitely the cause. I commented out the scope statements to see if they were the problem, and they aren't. There must be something else in the configuration that I've missed, but I can't spot it. Any ideas?
Thanks Brian
Brian Searle

Seems like this may be the issue,
see
http://javierguillen.wordpress.com/2012/03/05/userelationship-and-direction-of-context-propagation/
-
I have a dimension time and a table in which
there are some rows with the same date:
i.e.
DATA OP DESCRIPTION
09-02-2002 64.54 xxxx
09-02-2002 24.63 yyy
But on my workbook I can't see these rows.
I see empty fields.
The sqlinspector gives me the right query.
Best,
Ale

Hi,
don't quite understand the use case so answering in general: no data should be lost when using Pivot table
Frank -
Outer Join creating duplicated rows (sort of)
Greetings Forum,
Using version MII 11.5 sp 6
Joining two unique xml documents on three common columns using the Join Action block set up for an outer join. The results of the join are
Set a       Set b       Results in
c1  c2      c1  c3      c1  c2  c3
a   a1      a   a2      a   a1  a2
b   b1      b   b2      a   a1  ---   <-- this row is extra
c   c1                  b   b1  b2
d   d1                  b   b1  ---   <-- this row is extra
                        c   c1  ---
                        d   d1  ---
In this example the two extra rows match rows in set a that are not in set b, even though they did have matching data.
Any ideas? We have tried deleting the Join action and recreating it, and doing the join with two local xml documents.

Actually, it's three columns matched up to three columns. Attached is an actual sample right out of BLS.
[INFO SET 1 ]: <?xml version="1.0" encoding="UTF-8"?><Rowsets DateCreated="2008-08-12T15:50:16" EndDate="2008-08-12T16:27:22" StartDate="2008-08-12T16:27:22" Version="11.5.3">
<Rowset>
<Columns>
<Column Description="a" MaxRange="1" MinRange="0" Name="a" SQLDataType="1" SourceColumn="a"/>
<Column Description="b" MaxRange="1" MinRange="0" Name="b" SQLDataType="1" SourceColumn="b"/>
<Column Description="c" MaxRange="1" MinRange="0" Name="c" SQLDataType="1" SourceColumn="c"/>
<Column Description="d" MaxRange="1" MinRange="0" Name="d" SQLDataType="8" SourceColumn="d"/>
</Columns>
<Row>
<a>a1</a><b>b1</b><c>c1</c><d>d1</d>
</Row>
<Row>
<a>a2</a><b>b2</b><c>c2</c><d>d2</d>
</Row>
<Row>
<a>a3</a><b>b3</b><c>c3</c><d>d3</d>
</Row>
<Row>
<a>a4</a><b>b4</b><c>c4</c><d>d4</d>
</Row>
</Rowset>
</Rowsets>
[INFO SET 2]: <?xml version="1.0" encoding="UTF-8"?><Rowsets DateCreated="2008-08-12T15:50:16" EndDate="2008-08-12T16:27:22" StartDate="2008-08-12T16:27:22" Version="11.5.3">
<Rowset>
<Columns>
<Column Description="a" MaxRange="1" MinRange="0" Name="a" SQLDataType="1" SourceColumn="a"/>
<Column Description="b" MaxRange="1" MinRange="0" Name="b" SQLDataType="1" SourceColumn="b"/>
<Column Description="c" MaxRange="1" MinRange="0" Name="c" SQLDataType="1" SourceColumn="c"/>
<Column Description="e" MaxRange="1" MinRange="0" Name="e" SQLDataType="1" SourceColumn="e"/>
</Columns>
<Row>
<a>a1</a><b>b1</b><c>c1</c><e>e1</e>
</Row>
<Row>
<a>a2</a><b>b2</b><c>c2</c><e>e2</e>
</Row>
</Rowset>
</Rowsets>
[INFO RESULTS]: <?xml version="1.0" encoding="UTF-8"?><Rowsets DateCreated="2008-08-12T15:50:16" EndDate="2008-08-12T16:27:22" StartDate="2008-08-12T16:27:22" Version="10.0">
<Rowset>
<Columns>
<Column Description="a" MaxRange="1" MinRange="0" Name="a" SQLDataType="1" SourceColumn="a"/>
<Column Description="b" MaxRange="1" MinRange="0" Name="b" SQLDataType="1" SourceColumn="b"/>
<Column Description="c" MaxRange="1" MinRange="0" Name="c" SQLDataType="1" SourceColumn="c"/>
<Column Description="d" MaxRange="1" MinRange="0" Name="d" SQLDataType="8" SourceColumn="d"/>
<Column Description="e" MaxRange="1" MinRange="0" Name="e" SQLDataType="1" SourceColumn="e"/>
</Columns>
<Row>
<a>a1</a><b>b1</b><c>c1</c><d>d1</d><e>e1</e>
</Row>
<Row>
<a>a1</a><b>b1</b><c>c1</c><d>d1</d><e>---</e>
</Row>
<Row>
<a>a2</a><b>b2</b><c>c2</c><d>d2</d><e>e2</e>
</Row>
<Row>
<a>a2</a><b>b2</b><c>c2</c><d>d2</d><e>---</e>
</Row>
<Row>
<a>a3</a><b>b3</b><c>c3</c><d>d3</d><e>---</e>
</Row>
<Row>
<a>a4</a><b>b4</b><c>c4</c><d>d4</d><e>---</e>
</Row>
</Rowset>
</Rowsets> -
Duplicating rows + analytics functions
Does anybody know how to duplicate data in a table without using UNION?
For example, I have a table called products and I want to create a view that, for each row in the table, produces two identical rows plus an additional hardcoded column: that column will contain FIRST for the first row and SECOND for the second row.
so far i have something like this
select 'FIRST', name, lastname, producttype, prodcode from table products where ....
union all
select 'SECOND', name, lastname, producttype, prodcode from table products where ...
This will create two rows for every row in the products table. Is there a way to do the same thing using analytic functions, so I don't have to query the same table twice? Querying the table twice might hurt performance.
Thanks

Hi,
Try this code.
select decode(lvl,1,'FIRST','SECOND'), name, lastname, producttype, prodcode
from products, ( select 1 lvl from dual union all select 2 lvl from dual )
where ...

Example:
SQL> select decode(lvl,1,'FIRST','SECOND'), empno,ename,sal,comm,deptno
2 from emp_test,( select 1 lvl from dual union all select 2 lvl from dual );
DECODE EMPNO ENAME SAL COMM DEPTNO
FIRST 7369 SMITH 800 20
FIRST 7499 ALLEN 1600 300 30
FIRST 7521 WARD 1250 500 30
FIRST 7566 JONES 2975 20
FIRST 7654 MARTIN 1250 1400 30
FIRST 7698 BLAKE 2850 30
FIRST 7782 CLARK 2450 10
FIRST 7788 SCOTT 3000 20
FIRST 7839 KING 5000 0 10
FIRST 7844 TURNER 1500 30
FIRST 7876 ADAMS 1100 20
FIRST 7900 JAMES 950 30
FIRST 7902 FORD 3000 20
FIRST 7934 MILLER 1300 10
SECOND 7369 SMITH 800 20
SECOND 7499 ALLEN 1600 300 30
SECOND 7521 WARD 1250 500 30
SECOND 7566 JONES 2975 20
SECOND 7654 MARTIN 1250 1400 30
SECOND 7698 BLAKE 2850 30
SECOND 7782 CLARK 2450 10
SECOND 7788 SCOTT 3000 20
SECOND 7839 KING 5000 0 10
SECOND 7844 TURNER 1500 30
SECOND 7876 ADAMS 1100 20
SECOND 7900 JAMES 950 30
SECOND 7902 FORD 3000 20
SECOND 7934 MILLER 1300 10
28 rows selected.

SQL>

Regards,
Salim.
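A further Oracle idiom, not mentioned in the thread: the two-row generator joined against products can also be written with CONNECT BY LEVEL instead of the UNION ALL on dual:

```sql
-- Generate two copies of each products row; lvl takes the values 1 and 2.
SELECT DECODE(lvl, 1, 'FIRST', 'SECOND') AS tag,
       p.name, p.lastname, p.producttype, p.prodcode
FROM   products p,
       (SELECT LEVEL AS lvl FROM dual CONNECT BY LEVEL <= 2)
-- WHERE ...  (the original filter conditions go here)
;
```

Like the DECODE/dual version, this touches the products table only once, so the performance concern about querying it twice does not arise.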
Edited by: Salim Chelabi on 2010-03-23 14:56
Edited by: Salim Chelabi on 2010-03-23 14:58 -
Tabular - Parent/Child hierarchy - hide duplicated rows problem
Hi All, in all the Tabular parent-child hierarchy examples I could find, a measure is created to calculate the current node depth. That measure [CurrentNodeDepth] is later compared with the aggregated node depth using MAX/MIN functions. The current node depth measure
uses the ISFILTERED DAX function, like:
CurrentNodeDepth := IF (ISFILTERED ('Hierarchy'[Level3]), 3,IF (ISFILTERED ('Hierarchy'[Level2]), 2, IF (ISFILTERED ('Hierarchy'[Level1]), 1)))
All works OK until you start filtering the hierarchy, which might happen with very big hierarchies (showing only part of the hierarchy to speed up calculations). Once you set a filter there, the [CurrentNodeDepth] measure gives wrong output and the whole parent-child hierarchy
doesn't work as it should.

Can you provide more info on this? How have you modeled the hierarchy with the fact table? Are you rolling up the measures from the child to the parent level (modeled as measures), or does each employee show only its own revenue (modeled as an attribute)? If it is modeled as measures, where you join the employee table to the closure table and then that to the fact table, are you applying any filters in your report? If you are applying filters in your report and somehow only a parent is filtered (and not its children), then you won't be able to drill down, though you will see the + sign against it.