Data segregation into different columns
Hi,
I need help with the following:
I have data in the table as
col1 col2
3 343
3 567
3 333
3 987
3 987
There are other columns too. I have to populate this like:
col1 c1
3 (1) ---- 35399
3 (2) ---- 46388
3 (3) ---- 37377
Let me explain this:
col1 defines how many columns col2 will have. There will be a definite 10 rows (per record set).
The output will have 3(1) as the records of row 1 in col2... likewise.
Col2 can have a maximum of 26 columns. This means the result set will have at most 26 rows.
Please let me know if there is any confusion in the requirement.
Thanks
Edited by: user2544469 on Feb 9, 2011 4:38 AM
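Reading the sample data, the expected output looks like a digit-position transpose of col2: result row 1 concatenates the first digits of each value (3,5,3,9,9 → 35399), row 2 the second digits, and so on. That reading is an assumption about the requirement, but it can be sketched quickly in Python:

```python
def digit_transpose(values, positions):
    """Result row i is the concatenation of digit i of every value.
    Hypothetical helper; assumes each value has at least `positions` digits."""
    return ["".join(v[i] for v in values) for i in range(positions)]

rows = ["343", "567", "333", "987", "987"]   # the col2 sample data
print(digit_transpose(rows, 3))              # col1 = 3 -> 3 output rows
```

If that matches the requirement, the same transpose can be done in SQL with one SUBSTR per digit position combined per group, e.g. via LISTAGG on 11gR2 or MAX(DECODE(...)) with a position counter on earlier releases.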
user2544469 wrote:
> col1 defines how many columns col2 will have.
Columns do not contain columns. Columns do not contain "fields". I think you need to go back and revisit the basic rules of data normalization, and get this data model to third normal form.
Similar Messages
-
How to display rows of data into different columns?
I'm new to SQL and currently this is what I'm trying to do:
Display multiple rows of data into different columns within the same row
I have a table like this:
CREATE TABLE TRIPLEG(
T# NUMBER(10) NOT NULL,
LEG# NUMBER(2) NOT NULL,
DEPARTURE VARCHAR(30) NOT NULL,
DESTINATION VARCHAR(30) NOT NULL,
CONSTRAINT TRIPLEG_PKEY PRIMARY KEY (T#, LEG#),
CONSTRAINT TRIPLEG_UNIQUE UNIQUE(T#, DEPARTURE, DESTINATION),
CONSTRAINT TRIPLEG_FKEY1 FOREIGN KEY (T#) REFERENCES TRIP(T#) );
INSERT INTO TRIPLEG VALUES( 1, 1, 'Sydney', 'Melbourne');
INSERT INTO TRIPLEG VALUES( 1, 2, 'Melbourne', 'Adelaide');
The result should be something like this:
> T# | ORIGIN | DESTINATION1 | DESTINATION2
> 1 | SYDNEY | MELBORUNE | ADELAIDE
The query should include the `COUNT(T#) < 3` condition since I only need to display the records less than 3. How can I achieve the results that I want using relational views?
Thanks!!!

T# | LEG# | DEPARTURE | DESTINATION
1 | 1 | Sydney | Melbourne
1 | 2 | Melbourne | Adelaide
1 | 3 | Adelaide | India
1 | 4 | India | Dubai
2 | 1 | India | UAE
2 | 2 | UAE | Germany
2 | 3 | Germany | USA
On 11gR2, you may use this:
SELECT t#,
REGEXP_REPLACE (
LISTAGG (departure || '->' || destination, ' ')
WITHIN GROUP (ORDER BY t#, leg#),
'([^ ]+) \1+',
'\1')
FROM tripleg
where leg#<=3
GROUP BY t#;
Output:
1 Sydney->Melbourne->Adelaide->India
2 India->UAE->Germany->USA
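To get the tabular shape the question actually asked for (one row per trip, the first departure as ORIGIN and each destination in its own column), the grouping logic can be sketched in Python for comparison; the dict-of-lists shape is an assumption, not the SQL result:

```python
def pivot_trips(legs):
    """legs: (t#, leg#, departure, destination) tuples.
    Returns {t#: [origin, destination1, destination2, ...]}."""
    trips = {}
    for trip, leg_no, dep, dest in sorted(legs):
        if trip not in trips:
            trips[trip] = [dep]   # first leg's departure is the origin
        trips[trip].append(dest)  # each destination becomes a column
    return trips

legs = [(1, 1, "Sydney", "Melbourne"), (1, 2, "Melbourne", "Adelaide")]
print(pivot_trips(legs))  # {1: ['Sydney', 'Melbourne', 'Adelaide']}
```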
Cheers,
Manik. -
Breaking the string into different columns
Hi Guys,
I need to break the following string into different columns
'XXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007.'
I am trying to write it using instr and substr, but I am having some issues.
Is there any other way to do this? If not, can someone help me; below is the query that I am working on:
SELECT SUBSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', 1, INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', 1) - 1) col1,
SUBSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007',
INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', 1) + 1,
INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', 1, 2)
- INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', 1)
- 1
) col2,
SUBSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007',
INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', -1, 2) + 1,
INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', -1, 1)
- INSTR ('XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007', '.', -1, 2)
- 1
) col3
from dual
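For comparison, the same dot-delimited split without the nested INSTR/SUBSTR arithmetic, sketched in Python (an illustration of the splitting logic, not the Oracle answer itself):

```python
s = "XXXXXX.0001.09011.0001.00002.03.0004.0005.0006.00007"
cols = s.split(".")          # one element per dot-delimited field
col1, col2, col3 = cols[:3]  # the first three fields the query extracts
print(col1, col2, col3)      # XXXXXX 0001 09011
```

Note the original query's col3 actually counts dots from the end of the string; with a split that is simply `cols[-2]`.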
It is very urgent.
Thanks in advance.

npejavar wrote:
It is very urgent.
It doesn't look urgent. You could simply read the manuals for instr and substr, or describe the issues or errors you are having, or post sample data so people could help you more easily, or format your code so it is more readable; but you don't bother to do any of those things, so if it isn't important to you to expend any effort, why would it be important to us?
If it was really urgent it would be a violation of the conditions of use of these forums.
http://www.catb.org/esr/faqs/smart-questions.html#urgent
http://www.oracle.com/html/terms.html
>
4. Use of Community Services
Community Services are provided as a convenience to users and Oracle is not obligated to provide any technical support for, or participate in, Community Services. While Community Services may include information regarding Oracle products and services, including information from Oracle employees, they are not an official customer support channel for Oracle.
You may use Community Services subject to the following: (a) Community Services may be used solely for your personal, informational, noncommercial purposes; (b) Content provided on or through Community Services may not be redistributed; and (c) personal data about other users may not be stored or collected except where expressly authorized by Oracle -
Splitting one column into different columns.
Hello Experts,
How do I split a datetime column into different columns while doing a SELECT statement?
Ex:
The column "REC_CRT_TS" has data like "2014-05-08 08:23:09.0000000". The datatype of this column is "DateTime", and I want it in a SELECT statement like:
SELECT
YEAR(REC_CRT_TS) AS YEAR,
MONTH(REC_CRT_TS) AS MONTH,
DATENAME(MONTH, REC_CRT_TS) AS MONTHNAME,
DATEPART(WEEK, REC_CRT_TS) AS WEEKNUM,
DAY(REC_CRT_TS) AS DATE,
DATEPART(HOUR, REC_CRT_TS) AS HOUR
FROM TABLE_NAME;
The output should look like this;
--YEAR| MONTH | MONTHNAME| WEEKNUM | DATE | HOUR
--2014| 5 | May | 25 | 08 |08
Any suggestions please.
Thanks!
Rahman

I did a very quick search and found this blog post
http://www.jamesserra.com/archive/2011/08/microsoft-sql-server-parallel-data-warehouse-pdw-explained/
which says that PDW uses its own query engine and not all features of SQL Server are supported. So you might not be able to use all your DBA tricks, and you wouldn't want to build a solution against SQL Server and then just hope to upsize it to the Parallel Data Warehouse Edition.
So it is quite possible that this function doesn't exist in the PDW version of SQL Server. In that case you may want to implement a CASE-based month name or do it in the client application.
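Doing it in the client application is straightforward; a sketch in Python (the column name REC_CRT_TS and the sample value come from the question; note the week number depends on the numbering convention, here ISO, so it may differ from SQL Server's DATEPART(WEEK, ...)):

```python
from datetime import datetime

ts = datetime.strptime("2014-05-08 08:23:09", "%Y-%m-%d %H:%M:%S")
parts = {
    "YEAR": ts.year,
    "MONTH": ts.month,
    "MONTHNAME": ts.strftime("%B"),  # full month name, e.g. "May"
    "WEEKNUM": ts.isocalendar()[1],  # ISO week number
    "DATE": ts.day,
    "HOUR": ts.hour,
}
print(parts)
```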
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
My TechNet articles -
Parse string into different column and optimization
We are in the process of building an audit process for any changes that occur automatically or are made manually by the user on some of the table data. To do this we have two options:
1. Have master table to store the audit event summary and a detail table to store each column change with old and new values. Something like:
CREATE TABLE TEST_ADT_DTL (
EVNT_ID NUMBER,
COL_NAME VARCHAR2(1000),
OLD_COL_VAL VARCHAR2(1000),
NEW_COL_VAL VARCHAR2(1000)
);

but this approach has some processing overhead, since the changes to each record produce multiple rows based on the number of columns updated. If we are loading 40K transactions twice a month, and almost 30-40% of them change, the detail table will grow considerably.
2. To have the detail table with one column that will have a concatenated string of changes with field name, old and new values.
CREATE TABLE TEST_ADT_EVNT (
EVNT_ID NUMBER,
TBL_NAME VARCHAR2(100),
OPER_CD VARCHAR2(1),
USR_ID VARCHAR2(10),
ACT_DT DATE,
PK_STRNG_VAL VARCHAR2(100),
CMNT_TXT VARCHAR2(1000)
);

CREATE TABLE TEST_ADT_DTL (
EVNT_ID NUMBER,
ADT_LOG VARCHAR2(1000)
);
INSERT INTO TEST_ADT_EVNT VALUES (1, 'CUSTOMER', 'A', 'ABC', SYSDATE, 'CUS0001', 'SOME COMMENT');
INSERT INTO TEST_ADT_EVNT VALUES (2, 'CUSTOMER', 'U', 'ABC', SYSDATE, 'CUS0001', 'SOME COMMENT');
INSERT INTO TEST_ADT_EVNT VALUES (3, 'ORDER', 'A', 'XYZ', SYSDATE, 'CUS0002', 'SOME COMMENT');
INSERT INTO TEST_ADT_EVNT VALUES (4, 'ORDER', 'U', 'EFG', SYSDATE, 'CUS0002', 'SOME COMMENT');
INSERT INTO TEST_ADT_EVNT VALUES (5, 'ORDER', 'U', 'XYZ', SYSDATE, 'CUS0002', 'SOME COMMENT');
INSERT INTO TEST_ADT_DTL VALUES (2, 'FIELD:CITY,OLD:AVENEL,NEW:EDISON;FIELD:ZIP,OLD:07001,NEW:07056;');
INSERT INTO TEST_ADT_DTL VALUES (4, 'FIELD:ADDRESS,OLD:234 ROGER ST,NEW:124 WEST FIELD AVE;FIELD:STATE,OLD:NJ,NEW:NY;FIELD:PHONE,OLD:,NEW:2012230912;');
INSERT INTO TEST_ADT_DTL VALUES (5, 'FIELD:MID_NAME,OLD:,NEW:JASON;FIELD:ADDRESS,OLD:,NEW:3 COURT CT;');
COMMIT;

I want to know, if we want to generate a report for the audit log, how can I display the data from the detail table in columns? I mean, how do I parse the ADT_LOG column to show the data in three different columns like:
FIELD OLD NEW
CITY AVENEL EDISON
ZIP 07001 07056
...along with the columns from the EVNT table?
And I want to know which approach would be better.

Hey, I think I finally got it using the model clause.
Not sure if this will be faster or not.
You can increase the number of iterations if you are not hitting them all
(the lower your iteration number, the faster this will run):
select adt_log, field, old, new from
(
with TEST_ADT_DTL as
(select 2 evnt_id, 'FIELD:CITY,OLD:AVENEL,NEW:EDISON;FIELD:ZIP,OLD:07001,NEW:07056;' ADT_LOG FROM DUAL UNION
select 4, 'FIELD:ADDRESS,OLD:234 ROGER ST,NEW:124 WEST FIELD AVE;FIELD:STATE,OLD:NJ,NEW:NY;FIELD:PHONE,OLD:,NEW:2012230912;' from dual union
select 5, 'FIELD:MID_NAME,OLD:,NEW:JASON;FIELD:ADDRESS,OLD:,NEW:3 COURT CT;' from dual)
select evnt_id, adt_log, field, old, new from test_adt_dtl
model return updated rows
partition by (evnt_id)
dimension by (0 d)
measures (adt_log, adt_log field, adt_log old, adt_log new, 0 it_num)
rules iterate (50) -- until ?
(
adt_log[any] = adt_log[0],
field[0] = substr(adt_log[0], instr(adt_log[0],'FIELD',1,1)+6, instr(adt_log[0],',',1,1) - instr(adt_log[0],'FIELD',1,1)-6),
old[0] = substr(adt_log[0], instr(adt_log[0],'OLD',1,1)+4, instr(adt_log[0],',',1,2) - instr(adt_log[0],'OLD',1,1)-4),
new[0] = substr(adt_log[0], instr(adt_log[0],'NEW',1,1)+4, instr(adt_log[0],';',1,1) - instr(adt_log[0],'NEW',1,1)-4),
field[iteration_number] = substr(adt_log[0],
instr(adt_log[0],'FIELD:',1,iteration_number + 1) + 6,
instr(adt_log[0],',',(instr(adt_log[0],'FIELD:',1,iteration_number + 1) + 6),1)
- (instr(adt_log[0],'FIELD:',1,iteration_number + 1) + 6)),
old[iteration_number] = substr(adt_log[0],
instr(adt_log[0],'OLD:',1,iteration_number + 1) + 4,
instr(adt_log[0],',',(instr(adt_log[0],'OLD:',1,iteration_number + 1) + 4),1)
- (instr(adt_log[0],'OLD:',1,iteration_number + 1) + 4)),
new[iteration_number] = substr(adt_log[0],
instr(adt_log[0],'NEW:',1,iteration_number + 1) + 4,
instr(adt_log[0],';',1,iteration_number + 1)
- (instr(adt_log[0],'NEW:',1,iteration_number + 1) + 4))
)
)
where new is not null
order by evnt_id, it_num;

Edited by: pollywog on Apr 13, 2010 10:28 AM -
Changing rows into different column names
Hi,
I need to transpose the rows into different column names.
My sample data:
id , val
1 3
1 4
1 5
into
id , val1, val2 , val3 , val4 ... valn ..
1 3 4 5
From AskTom I see how to transpose into a single column using a ref cursor.
How can I make it into different column names?
Kindly advise.
tks & rdgs

For example, let's say that you want to order your columns from least value to greatest and that you'll never have more than three values per id. Then you can use the analytic function row_number() like this to create a pivot value:
select id, val,
row_number() over (partition by id order by val) as rn
from your_table;

And so your pivot query ends up looking like this:
select id,
max(case when rn=1 then val end) AS val1,
max(case when rn=2 then val end) AS val2,
max(case when rn=3 then val end) AS val3
from (
select id, val,
row_number() over (partition by id order by val) as rn
from your_table
)
group by id;

But notice that I started out by making up answers to Justin's questions. You'll have to supply the real answers. -
When I import a text file (comma separated) into a Numbers spreadsheet, all the data goes into one column instead of individual columns based on the comma separators. Excel allows you to do this during the import. Is there a way to accomplish this in Numbers without opening it in Excel and then importing into Numbers?
Your user info says iPad. This is the OS X Numbers forum. Assuming you are using OS X… Be sure the file is named with a .csv suffix.
(I don't have an iPad, so I don't know the iOS answer.) -
Compare dates in a different columns
Hi All,
How do I get the largest date out of different columns?
Here is my query:
select * from
(select date1 from table1) a,
(select date2 from table2) b,
(select date3 from table3) c
I want to get the largest date among date1, date2 and date3
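GREATEST does this directly in SQL; just to illustrate the semantics, the same comparison in Python (the sample dates are hypothetical):

```python
from datetime import date

# GREATEST(date1, date2, date3) picks the largest of the three values per row
date1, date2, date3 = date(2011, 2, 9), date(2011, 3, 1), date(2010, 12, 31)
largest = max(date1, date2, date3)
print(largest)  # 2011-03-01
```

One caveat: Oracle's GREATEST returns NULL if any argument is NULL, which this max() call does not model.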
thank you in advance

Hi,
I think the following query helps you:
SELECT GREATEST(a,b,c) FROM(
SELECT
(SELECT MAX(SYSDATE+1) FROM EMP WHERE EMPNO=D.EMPNO) as a,
(SELECT MAX(SYSDATE+2) FROM EMP WHERE EMPNO=D.EMPNO) as b ,
(SELECT MAX(SYSDATE+3) FROM EMP WHERE EMPNO=D.EMPNO) AS C FROM EMP D WHERE EMPNO=7698)

Regards
Reddy. -
How to parse a delimited string and insert into different columns?
Hi Experts,
I need to parse a delimited string ':level1_value:level2_value:level3_value:...' to 'level1_value', 'level2_value', etc., and insert them into different columns of one table as one row:
Table_Level (Level1, Level2, Level3, ...)
I know I can use substr and instr to get the level values one by one and insert them into the table, but I'm wondering if there's a better way to do it?
Thanks!

user9954260 wrote:
However, there is one tiny problem - the delimiter from the source system is a '|'. When I replace your test query with | as the delimiter instead of the : it fails. Interestingly, if I use ; it works. See below:
with t as (
select 'str1|str2|str3||str5|str6' x from dual union all
select '|str2|str3|str4|str5|str6' from dual union all
select 'str1|str2|str3|str4|str5|' from dual union all
select 'str1|str2|||str5|str6' from dual)
select x,
regexp_replace(x,'^([^|]*).*$','\1') y1,
regexp_replace(x,'^[^|]*|([^|]*).*$','\1') y2,
regexp_replace(x,'^([^|]*|){2}([^|]*).*$','\2') y3,
regexp_replace(x,'^([^|]*|){3}([^|]*).*$','\2') y4,
regexp_replace(x,'^([^|]*|){4}([^|]*).*$','\2') y5,
regexp_replace(x,'^([^|]*|){5}([^|]*).*$','\2') y6
from t;
The "bar" or "pipe" symbol is a special character, also called a metacharacter.
If you want to use it as a literal in a regular expression, you will need to escape it with a backslash character (\).
Here's the solution -
test@ORA11G> with t as (
select 'str1|str2|str3||str5|str6' x from dual union all
select '|str2|str3|str4|str5|str6' from dual union all
select 'str1|str2|str3|str4|str5|' from dual union all
select 'str1|str2|||str5|str6' from dual)
select x,
regexp_replace(x,'^([^|]*).*$','\1') y1,
regexp_replace(x,'^[^|]*\|([^|]*).*$','\1') y2,
regexp_replace(x,'^([^|]*\|){2}([^|]*).*$','\2') y3,
regexp_replace(x,'^([^|]*\|){3}([^|]*).*$','\2') y4,
regexp_replace(x,'^([^|]*\|){4}([^|]*).*$','\2') y5,
regexp_replace(x,'^([^|]*\|){5}([^|]*).*$','\2') y6
from t;
X Y1 Y2 Y3 Y4 Y5 Y6
str1|str2|str3||str5|str6 str1 str2 str3 str5 str6
|str2|str3|str4|str5|str6 str2 str3 str4 str5 str6
str1|str2|str3|str4|str5| str1 str2 str3 str4 str5
str1|str2|||str5|str6 str1 str2 str5 str6
4 rows selected.
isotope
PS - it works for semi-colon character ";" because it is not a metacharacter. So its literal value is considered by the regex engine for matching.
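The metacharacter pitfall disappears entirely if you split on a literal delimiter instead of a regex; for comparison, the same test data handled in Python, where str.split treats '|' literally:

```python
def split_fields(s, delim="|", n=6):
    """Split into exactly n fields, keeping empty strings for ||."""
    parts = s.split(delim)
    return (parts + [""] * n)[:n]

print(split_fields("str1|str2|str3||str5|str6"))
# ['str1', 'str2', 'str3', '', 'str5', 'str6']
```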
Edited by: isotope on Feb 26, 2010 11:09 AM -
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, you need to be careful how you set status in the PRECOMPUTE clause of a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value, but the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March.

To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data:

DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'

Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement:

RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below is my case study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here it's right!! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here it must be 30,00, not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.

Mel,
There are several different types of "data loaded into different hierarchy levels" and the approach to solving the issue is different depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. Example would include loading data at "quarter" in historical years, but at "month" in the current year, it does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February for a result of 20.
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
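The _SELF-node rollup above can be sanity-checked with a toy computation; a Python sketch using the values from the 2005 example (5 and 15 loaded at month level, 10 loaded at the year level as 2005_SELF, NA modeled as None per NASKIP YES):

```python
def rollup(children):
    """Sum child values, skipping NA (None) -- the NASKIP YES behavior."""
    return sum(v for v in children.values() if v is not None)

children_2005 = {"200501": 5.0, "200502": 15.0, "200503": None,
                 "2005_SELF": 10.0}  # data loaded at the year level
print(rollup(children_2005))  # 30.0
```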
3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation. -
ALV -Excel download -header & data comming in Different columns
Hi,
I have an ALV report with an FM. When I download it to Excel through List/Export/Local File, some columns get messed up (e.g. the header is in one column and the data comes in a different column).
When I do it with the List/Export/Spreadsheet option it works fine,
but users want to use the first option only.
I am using FMs in this report.
Regards
Praveen

Hi
What statement did you use to download the data? Check the field catalog that you are passing to the download function module,
or check it in debugging mode.
Change these options in Excel and check if it helps:
1. Go to Tools -> Macro -> Security in Excel
2. Select the Trusted Sources tab and ensure that the checkbox titled Trust access to Visual Basic Project is ticked.
3. With the feature switched on, the data is passed to Excel.
check this sample one
TABLES : MAST , "Material to BOM Link
STKO , "BOM Header
MARA . "General Material Data
* Types Begin with TY_ *
TYPES : BEGIN OF TY_MASTER ,
MATNR TYPE MAST-MATNR , "Material Number
WERKS TYPE MAST-WERKS , "Plant
STLAN TYPE MAST-STLAN , "BOM Usage
STLNR TYPE MAST-STLNR , "Bill of material
STLAL TYPE MAST-STLAL , "Alternative BOM
ANDAT TYPE MAST-ANDAT , "Date record created on
AEDAT TYPE MAST-AEDAT , "Date of Last Change
AENAM TYPE MAST-AENAM , "Name of Person Who Changed Object
STLST TYPE STKO-STLST , "BOM status
ZPLP1 TYPE MBEW-ZPLP1 , "Future Planned Price 1
DWERK TYPE MVKE-DWERK , "Delivering Plant (Own or External)
END OF TY_MASTER .
TYPES : MY_TYPE(20) TYPE C.
* Constants Begin with C_ *
* Internal tables Begin with IT_ *
DATA : IT_MASTER TYPE STANDARD TABLE OF TY_MASTER,
WA_MASTER TYPE TY_MASTER .
DATA : IT_HEADER TYPE TABLE OF MY_TYPE.
* Data Begin with W_ *
DATA : W_PTH TYPE RLGRAP-FILENAME.
DATA : W_FILE TYPE RLGRAP-FILENAME.
* Field Symbols Begin with FS_ *
* Select Options Begin with SO_ *
* Parameter Begin with PR_ *
* I N I T I A L I Z A T I O N *
*--- Add Header Fields to Header Table ---
APPEND 'Material Number' TO IT_HEADER .
APPEND 'Plant' TO IT_HEADER .
APPEND 'BOM Usage' TO IT_HEADER .
APPEND 'Bill Code' TO IT_HEADER .
APPEND 'Alternative BOM' TO IT_HEADER .
APPEND 'Created On' TO IT_HEADER .
APPEND 'Changed On' TO IT_HEADER .
APPEND 'Changed By' TO IT_HEADER .
APPEND 'BOM Status' TO IT_HEADER .
APPEND 'Planned Price' TO IT_HEADER .
APPEND 'Delivery Plant' TO IT_HEADER .
IF SY-MANDT = '700'.
W_PTH = '\lkdb01ISDISSoftware DevelopmentsDevelopmentsData FilesSAP DumpsBOM_Available'.
ELSE.
W_PTH = 'C:'.
ENDIF.
* A T S E L E C T I O N S C R E E N *
* s t a r t o f s e l e c t i o n
START-OF-SELECTION.
*--- Load Data to Internal Table ---
* SELECT MAST~MATNR MAST~WERKS MAST~STLAN MAST~STLNR MAST~STLAL MAST~ANDAT MAST~AEDAT MAST~AENAM STKO~STLST
* INTO TABLE IT_MASTER
* FROM MAST
* INNER JOIN STKO ON STKO~STLNR EQ MAST~STLNR
* AND STKO~STLAL EQ MAST~STLAL
* INNER JOIN MARA ON MARA~MATNR EQ MAST~MATNR
* WHERE MARA~MTART LIKE 'ZFG%'
* AND STKO~LKENZ NE 'X'
* AND STKO~LOEKZ NE 'X'
* AND STKO~STLST EQ '1'.
SELECT MAST~MATNR MAST~WERKS MAST~STLAN MAST~STLNR MAST~STLAL MAST~ANDAT MAST~AEDAT MAST~AENAM STKO~STLST MBEW~ZPLP1 MVKE~DWERK
INTO TABLE IT_MASTER
FROM MAST
INNER JOIN STKO ON STKO~STLNR EQ MAST~STLNR
AND STKO~STLAL EQ MAST~STLAL
INNER JOIN MARA ON MARA~MATNR EQ MAST~MATNR
INNER JOIN MBEW ON MBEW~MATNR EQ MAST~MATNR
AND MBEW~BWKEY EQ MAST~WERKS
INNER JOIN MVKE ON MVKE~MATNR EQ MAST~MATNR
WHERE MARA~MTART LIKE 'ZFG%'
AND STKO~LKENZ NE 'X'
AND STKO~LOEKZ NE 'X'
AND STKO~STLST EQ '1'.
IF SY-SUBRC <> 0.
MESSAGE I014(ZLOAD).
ENDIF.
*--- Set Path to Function Module ---
CONCATENATE W_PTH SY-DATUM ' - ' 'BOM_AVAILABLE_PLANT.XLS' INTO W_FILE.
CALL FUNCTION 'WS_DOWNLOAD'
EXPORTING
FILENAME = W_FILE
FILETYPE = 'DAT'
TABLES
DATA_TAB = IT_MASTER
FIELDNAMES = IT_HEADER
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_WRITE_ERROR = 2
INVALID_FILESIZE = 3
INVALID_TYPE = 4
NO_BATCH = 5
UNKNOWN_ERROR = 6
INVALID_TABLE_WIDTH = 7
GUI_REFUSE_FILETRANSFER = 8
CUSTOMER_ERROR = 9
OTHERS = 10.
IF SY-SUBRC = 0.
SUBMIT ZI005_MARA_DUMP_SOLIDEAL_N.
MESSAGE I023(ZLOAD) WITH text-001.
ELSE.
MESSAGE I022(ZLOAD) WITH W_FILE. "Errors while downloading.
ENDIF.
END-OF-SELECTION.
SUBMIT ZI005_MARA_DUMP_SOLIDEAL_N.
Reward all helpful answers
Regards
Pavan
Message was edited by:
Pavan praveen -
Insert data 32K into a column of type LONG using the oracle server side jdbc driver
Hi,
I need to insert data of more than 32k into a
column of type LONG.
I use the following code:
String s = "larger then 32K";
PreparedStatement pstmt = dbcon.prepareStatement(
"INSERT INTO TEST (LO) VALUES (?)");
pstmt.setCharacterStream(1, new StringReader(s), s.length());
pstmt.executeUpdate();
dbcon.commit();
If I use the "standard" Oracle thin client driver from classes_12.zip ("jdbc:oracle:thin:@kn7:1521:kn7a") everything works fine. But if I use the Oracle server-side JDBC driver ("jdbc:default:connection:") I get the exception java.sql.SQLException:
Datasize larger then max. datasize for this type: oracle.jdbc.kprb.KprbDBStatement@50f4f46c
whenever the string s exceeds a length of 32767 bytes.
I'm afraid it has something to do with the 32K limitation in PL/SQL but in fact we do not use any PL/SQL code in this case.
What can we do? Using LOB's is not an option because we have client software written in 3rd party 4gl language that is unable to handle LOB's.
Any idea would be appreciated.
Thomas Stiegler

In the rdbms 8.1.7 "relnotes" folder, there is a "Readme_JDBC.txt" file (on Win NT) stating:
Known Problems/Limitations In This Release
<entries 1 through 3 omitted for brevity>
4. The Server-side Internal Driver has the following limitation:
- Data access for LONG and LONG RAW types is limited to 32K of
data. -
Splitting data up into two columns
Apologies if this is covered previously (I'm sure it probably is). I've imported a .csv into Numbers, but the first cell contains two pieces of information separated by a space (TI NUMEROLOGY etc.). How do I separate it into two cells so I can sort the data by the different identifiers (TI = Title)?
IB 1859060196
BI PAPERBACK
AU SHINE
BC VXFN
CO UK
PD 19990923
NP 128
RP 9.99
RI 9.99
RE 9.99
PU CONNECTIONS BOOK PUBLISHING
YP 1999
TI NUMEROLOGY
TI YOUR CHARACTER AND FUTURE REVEALED IN NUMBERS
EA 9781859060193
RF R
SG 2
GC M01
DE A unique step-by-step visual approach to numerology
DE characters and compatibility from names and birth dates.

Hi Alan,
Another thought. You can sort the data as it is. Click on Reorganize button on the ToolBar:
To get:
Every cell starting with TI and a space will come together, and sorted by whatever follows TI and a space.
However, if it looks nicer to split the entries, use Badunit's formula.
Alan wrote: "I also clicked on C2 and went to insert/fill/ but the options are greyed out o I can'r apply to all unless I do it manually... and it's a mahoosive file."
To fill down, click on C2 then drag the white handle down. Or, select the rows that you want to fill before Insert > Fill (Numbers needs to know how far to fill).
Ian. -
Mapping the same column value into different columns based on condition
Hi experts,
We have source as Oracle tables and target also Oracel tables.
Source
Column1|Column2|Column3
Target
trg1|trg2
I need to
insert the value of Column1 into trg1 where column2='xxx'
insert the value of column1 into trg2 where column3='yyy'
After giving the conditions on the mapping, the WHERE conditions are getting clubbed with AND, and I am getting a different value than what is expected. I have the same scenario for most of the columns in my target table. Suggest how to do it.
Thanks in advance.Hi,
I tried the mapping but am getting duplicate records.
Case 1: Only the AND condition is getting applied automatically, as per the joins between the tables. I am not able to change the AND to an OR condition since there are three table joins.
Case 2: When I try to execute a separate query for the column, I get the values, but once it is put in a CASE statement only NULL values are fetched.
As all my target mappings are like this, a lot of issues are being raised while fetching records.
I have given my scenario once again below.
src1
col1
src2
col1
col2
col3
col4
src3
col1
tar1
col1
col2
mapping given are
tar1.col1 = select src2.col3 where src2.col1='xxx' and src2.col2='yyy' and src2.col4=src1.col1
tar1.col2 = select src2.col3 where src2.col1='aaa' and src2.col2='bbb' and src2.col4=scr2.col1
Kindly suggest how to do the mapping in detail... I am stuck because of this mapping... thanks in advance.
Edited by: siva on Nov 23, 2011 12:34 PM -
Separate addresses into different columns in numbers
I am migrating some customer data from Front Desk into Infusionsoft. Front Desk has exported the addresses into a single column but I need the addresses separated into 4 separate columns: Street Address, City, State, Zip Code. How can I accomplish this?
I see the problem. That's not so easy because the addresses are not uniform. Some have a comma between address and city, and some have a linefeed.
If you are using Numbers 3 then the script below should help. To use it:
Copy-paste the script into Script Editor (in Applications > Utilities)
Select the cells in the column with the addresses you want to split
With the cells selected click the "run" button in Script Editor, and wait for the notification to paste.
Click once in the top-left cell of the range where you want to paste the values.
Type command-v to paste.
If all goes well, you should get something like this:
If you have zip codes with leading zeros, first format that column as text before pasting the results from the script.
It's better to not have blank rows in the middle of the data, but the script may be able to handle that gracefully.
If you have problems, post a screenshot of results and some adjustments to the script should do the trick. This works in Numbers 3. If you're still using Numbers 2, then the script will need modification.
SG
tell application "Numbers"
tell document 1's active sheet
tell (first table whose selection range's class is range)
tell selection range
set pasteStr to ""
repeat with c in cells
set v to c's value
set pasteStr to pasteStr & my parseAddress(v)
end repeat
end tell
end tell
end tell
end tell
set the clipboard to pasteStr
display notification "Click a cell once and command-v to paste"
to parseAddress(s)
try
set zip to s's word -1 -- last "word"
set state to s's word -2 -- second to last word
set AppleScript's text item delimiters to {",", linefeed}
set sParts to s's text items
if sParts's length = 3 then
set street to sParts's item 1
set city to sParts's item 2
else
set street to sParts's item 1 & " " & sParts's item 2
set city to sParts's item 3
end if
set AppleScript's text item delimiters to ""
return street & tab & city & tab & state & tab & zip & return
on error
return return -- a "blank" for that line
end try
end parseAddress
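The parseAddress handler's logic (last word is the zip, second-to-last the state, and a comma separates street from city) ports directly to other languages; a Python sketch for reference (the sample address is hypothetical, and like the AppleScript it assumes fairly uniform input):

```python
def parse_address(s):
    """Split 'street, city ST zip' into (street, city, state, zip),
    mirroring the AppleScript handler above."""
    words = s.split()
    state, zip_code = words[-2], words[-1]
    head = s.rsplit(None, 2)[0].rstrip(",")   # everything before state/zip
    street, city = [p.strip() for p in head.split(",", 1)]
    return street, city, state, zip_code

print(parse_address("123 Main St, Springfield IL 62704"))
# ('123 Main St', 'Springfield', 'IL', '62704')
```

As with the script, format the zip column as text first if your zip codes have leading zeros.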