IN clause with more than 1 column, possible?
Hi,
I have an existing (simplified) query:
SELECT sum(bandwidth)
FROM a, b
WHERE a.tag = b.tag AND a.id IN
(SELECT id FROM ...);
If the subquery returns an id, say, twice, I would like the bandwidth to be added twice. E.g. Suppose the query returns (12, 12), then I would like the bandwidth added twice, instead of once as is currently.
Is this not possible with the "IN" clause? Without resorting to stored functions, cursors, etc., is there a pure SQL way to do this? TIA!
Thanks all for the speedy replies and for helping me find a solution.
Yes, using a simple subquery / join works just fine.
I had tried that before I posted the question, but I guess the subquery has to go BEFORE the tables?? E.g. I had this before, and it gave errors, so I gave up:
SELECT sum(bandwidth)
FROM a, b, (SELECT id FROM ...) p
WHERE a.tag = b.tag
AND a.id = p.id;
After your suggestions I went back and switched the order:
SELECT sum(bandwidth)
FROM (SELECT id FROM ...) p, a, b
WHERE a.tag = b.tag
AND a.id = p.id;
and now it works!
Thanks much!
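For readers landing on this thread: the difference between IN and the join can be demonstrated end-to-end. The sketch below is not from the thread; it uses Python's sqlite3 as a stand-in database, drops table b from the original query to keep it minimal, and the table and column names are made up.

```python
import sqlite3

# Stand-in tables; the original query's table b is omitted for brevity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, tag TEXT, bandwidth INTEGER);
    CREATE TABLE ids (id INTEGER);     -- plays the role of (SELECT id FROM ...)
    INSERT INTO a VALUES (12, 't', 100);
    INSERT INTO ids VALUES (12), (12); -- id 12 returned twice
""")

# IN deduplicates: the matching row in a is counted once.
(in_sum,) = conn.execute(
    "SELECT sum(bandwidth) FROM a WHERE a.id IN (SELECT id FROM ids)"
).fetchone()

# Joining the subquery as an inline view keeps one joined row per
# occurrence of the id, so the bandwidth is added twice.
(join_sum,) = conn.execute(
    "SELECT sum(bandwidth) FROM a, (SELECT id FROM ids) p WHERE a.id = p.id"
).fetchone()

print(in_sum, join_sum)   # 100 200
```

This is exactly why the join form solved the problem: an inline view participates in the join row-for-row, while IN is a pure membership test.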
Similar Messages
-
Table with more than 35 columns
Hello All.
How can one work with a table with more than 35 columns
on JDev 9.0.3.3?
My other question is related to this.
Setting Entity Bean properties from a Session Bean
brought up the error, but when setting them from inside the EJB,
the bug does not appear.
Is this right?
Thank you.
Thank you all for the replies.
Here's my problem:
I have an AS400/DB2 Database, a huge and an old one.
There are many COBOL programs used to communicate with this DB.
My project is to transfer the database with the same structure and the same contents to a Linux/ORACLE System.
I will not remake the COBOL Programs. I will use the existing one on the Linux System.
So the tables of the new DB should be the same as the old one.
That’s why I can not make a relational DB. I have to make an exact migration.
Unfortunately I have some tables with more than 5000 COLUMNS.
Now my question is:
Can I modify the parameters of the ORACLE DB to make it accept tables and views with more than 1000 columns? If not, is it possible to write a PL/SQL function that simulates a table? This function would insert/update/select data across many other small tables (<1000 columns each). In other words, a method that makes the ORACLE DB act as if it had a table with a huge number of columns.
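This question did not get an answer in the thread, but one common workaround worth sketching is vertical partitioning: split the wide table into several physical parts sharing the primary key and expose them through a join view. Note that, as far as I know, Oracle's column limit applies to views as well, so this only illustrates the mechanism rather than fully solving the 5000-column case. The sketch below uses Python with SQLite as a stand-in, and all names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two physical parts of one logical wide table, sharing the primary key;
# wide_v presents them as a single table again.
conn.executescript("""
    CREATE TABLE wide_part1 (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT);
    CREATE TABLE wide_part2 (id INTEGER PRIMARY KEY, col3 TEXT, col4 TEXT);
    CREATE VIEW wide_v AS
        SELECT p1.id, p1.col1, p1.col2, p2.col3, p2.col4
        FROM wide_part1 p1 JOIN wide_part2 p2 ON p1.id = p2.id;
    -- An INSTEAD OF trigger makes the view writable as well.
    CREATE TRIGGER wide_v_ins INSTEAD OF INSERT ON wide_v
    BEGIN
        INSERT INTO wide_part1 VALUES (NEW.id, NEW.col1, NEW.col2);
        INSERT INTO wide_part2 VALUES (NEW.id, NEW.col3, NEW.col4);
    END;
""")
conn.execute("INSERT INTO wide_v VALUES (1, 'a', 'b', 'c', 'd')")
row = conn.execute("SELECT * FROM wide_v WHERE id = 1").fetchone()
print(row)   # (1, 'a', 'b', 'c', 'd')
```

In Oracle the writable-view part would be done with INSTEAD OF triggers as well, so the COBOL programs could keep addressing one logical name.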
I know it's crazy but any idea please. -
Having with more than one clause
Hi
Is it possible to have a query with more than one condition in the HAVING clause?
For example, in my query I have COUNT, SUM and AVG, and I need to use 3 conditions in HAVING. Is it possible?
Thank you in advance.
Hi,
yes, in HAVING you can also use AND and OR:
with x as (select 1 nr from dual)
select nr
from x
group by nr
having count(*) = 1
and sum(nr) = 1
Herald ten Dam
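For completeness, the same multi-condition HAVING pattern can be run end-to-end. The sketch below is not from the thread; it uses Python's sqlite3 with a made-up orders table and combines three aggregate conditions, as in the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('a', 10), ('a', 20), ('b', 5), ('b', 5), ('c', 100);
""")
# Three aggregate conditions (COUNT, SUM, AVG) joined with AND
# in a single HAVING clause.
rows = sorted(conn.execute("""
    SELECT customer
    FROM orders
    GROUP BY customer
    HAVING count(*) = 2
       AND sum(amount) >= 10
       AND avg(amount) <= 15
""").fetchall())
print(rows)   # [('a',), ('b',)]
```

Customer 'c' is filtered out by the count condition even though its sum and average are large, showing that all conditions must hold together.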
http://htendam.wordpress.com -
Create a logical column with more than one data source
I'm having a problem creating a logical column with more than one data source in Siebel 7.8.
What I want to do is the union of 2 physical tables in one logical table.
For example, I have a "local_clients" table and a "abroad_clients" table. What I want is to have a logical table "clients" with the client data from the 2 tables.
What I've tried is dragging the data sources I need onto the logical column.
However, this isn't working because it only retrieves the data from the first data source.
Hi!
I think it is not possible to do this just by dragging the columns to the logical table. A logical table can have more than one source, but I think each column must have just one direct source column.
I'm not sure, but maybe you should do the UNION SQL to get the data of the two tables. In the physical layer, when you create a new physical table, it's possible to set the "table type" as a "SELECT". I didn't try that, but it seems that it's possible to have the union table in the physical layer.
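The SELECT-type physical table suggested above would essentially wrap a UNION. Here is a runnable sketch of that union (not from the thread; Python with sqlite3 as a stand-in, hypothetical client tables). UNION ALL keeps duplicates; plain UNION would remove them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE local_clients  (client_id INTEGER, name TEXT);
    CREATE TABLE abroad_clients (client_id INTEGER, name TEXT);
    INSERT INTO local_clients  VALUES (1, 'Ana');
    INSERT INTO abroad_clients VALUES (2, 'Boris');
""")
# The SQL a SELECT-type physical table would hold: one logical
# "clients" source built as the union of the two physical tables.
rows = conn.execute("""
    SELECT client_id, name, 'local'  AS origin FROM local_clients
    UNION ALL
    SELECT client_id, name, 'abroad' AS origin FROM abroad_clients
""").fetchall()
print(rows)   # [(1, 'Ana', 'local'), (2, 'Boris', 'abroad')]
```

The extra origin column is optional but often useful so reports can still distinguish the two underlying tables.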
Bye.
Message was edited by:
user578388 -
Need help: Dimensional column has assoc with more than 1 level
Hi,
I am trying to have multiple drill paths for a single dimension. For example, the Date dimension should be navigable by the following hierarchies:
DateDimension
+---- Calendar Year
......---- Calendar Quarter
............--- Calendar Month
..................-- Day
+---- Calendar Year
......---- Day
So, in other words, in Answers I should be able to add 'Calendar Year' to my report, display results, and then click down the path Calendar Year -> Calendar Quarter -> Calendar Month -> Day OR Calendar Year -> Day.
However, when I model this in the Admin tool, it allows it, but then I add the column in Answers and get a runtime error:
+[nQSError: 14064] Dimensional column [column name] has associations with more than one level+
This is possible in BO and other tools. If it is not possible in Oracle BI EE, then one alternative would be to have two separate versions of 'Calendar Year' added to the report in Answers, each with their own drill path.
Any ideas???
Thanks for any help you can provide.
Matt Warden
Balanced Insight, Inc.
Hi,
Try this one.
Create the hierarchy Year --> Quarter --> Month --> Week --> Day in the RPD in the usual manner. But in the Year report, use the navigate option in the column properties: add a quarterly report (caption it 'Quarter') and a day-level report (caption it 'Day') as navigation targets. So when you click on the Year column it will prompt you to select either Quarter or Day, and when you select Quarter you will have the normal drill functionality from Quarter to Month to Week and Day. Hope this helps you.
Thanks. -
By subscribing to Creative Cloud (Photoshop and Lightroom), does it come with more than one license, and if it does, is it possible to install on both Windows and Apple's OS X? Thanks.
A Cloud subscription provides for working installations on two machines. You can have mixed operating systems (both Windows and Apple's OS X).
-
Is there a way to open CSV files with more than 255 columns?
I have a CSV file with more than 255 columns of data. It's a fairly standard export of social media data that shows volume of posts by day for the past year, from which I can analyze the data and publish customized charts. Very easy in Excel but I'm hitting the Numbers limit of 255 columns per table. Is there a way to work around the limitation? Perhaps splitting the CSV in two? The data shows up in the CSV file when I open via TextEdit, so it's there. Just can't access it in Numbers. And it's not very usable/useful for me in TextEdit.
Regards,
Tim
You might be better off with Excel. Even if you could find a way to easily split the CSV file into two tables, it would be two tables when you want only one. You said you want to make charts from this data. While a series on a chart can be constructed from data in two different tables, doing so takes a few extra steps for each series on the chart.
For a test to see if you want to proceed, make two small tables with data spanning the tables and make a chart from that data. Make the chart the normal way using the data in the first table then repeat the following steps for each series
Select the series in the chart
Go to Format sidebar
Click in the "Value" box
Add a comma then select the data for this series from the second table
Press Return
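The "splitting the CSV in two" idea from the question can be sketched as follows. This script is hypothetical (not part of the thread); it repeats the first key column in both halves so the two resulting tables can still be related to each other.

```python
import csv
import io

SPLIT_AT = 255   # Numbers' per-table column limit mentioned above

def split_csv(src, left, right, key_cols=1):
    """Write the first SPLIT_AT columns to `left` and the remainder to
    `right`, copying the first key_cols columns into both halves."""
    lw, rw = csv.writer(left), csv.writer(right)
    for row in csv.reader(src):
        lw.writerow(row[:SPLIT_AT])
        rw.writerow(row[:key_cols] + row[SPLIT_AT:])

# Tiny demonstration with an in-memory "file" of 300 columns.
header = ["day"] + [f"c{i}" for i in range(1, 300)]
src = io.StringIO("\n".join([",".join(header), ",".join(["x"] * 300)]))
left, right = io.StringIO(), io.StringIO()
split_csv(src, left, right)
```

With real files, the StringIO objects would be replaced by open file handles; each half then stays under the 255-column limit.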
If there is an easier way to do this, maybe someone else will chime in with that info. -
Spatial index creation for table with more than one geometry columns?
I have a table with more than one geometry column.
I've added a record in the USER_SDO_GEOM_METADATA table for every geometry column in the table.
When I try to create spatial indexes on the geometry columns in the table, I get this error message:
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
ORA-06512: at line 1
What is the solution?
I've got errors in my USER_SDO_GEOM_METADATA.
The problem does not exists! -
Report with more than 30 columns
XML Pub Gurus,
I have a requirement where I need to create a table layout with proper spacing for more than 30 columns. I asked this question of the Open World XML Pub team and they responded: "The page size in Word has to be increased to allow adding that many columns." I was satisfied with the answer, not knowing that Word will not allow more than 22" in landscape. So, I am puzzled by the answer from Open World.
So, could somebody please give me guide lines to create this type of report for more than 30 Columns. This is very common requirement for us to report some of the stuff from the db.
Thanks in advance
Ram G
What do you want your final destination format to be? I'm thinking you'll probably want to go out to HTML or Excel. I would think that would be the only place where you could read and use a report with more than 30 columns.
Here's an idea if you want to go to excel and if you are using the 10.1.3.2 Enterprise (standalone) version.
1. Create your query with as many columns as you want.
2. Don't create a layout at all
3. Launch Excel Analyzer.
4. From within Excel, build your report based off the downloaded data (for example add a pivot table).
5. Upload your excel Analyzer template back to the server
This template can now be scheduled and delivered like normal templates or updated with current data directly from within Excel. -
Attachment with more than 255 columns
Hi together,
I want to send a mail in the background with an attachment with more than 255 columns through the function module SO_DOCUMENT_SEND_API1. The required content structure of this function module has only 255 columns.
I also tried the function module SO_DOCUMENT_REPOSITORY_MANAGER with the method 'SEND',
but I can't suppress the popup of this function module.
Does anybody have a solution for me?
br
Markus
Edited by: Markus Garyant on Aug 21, 2008 3:39 PM
The attachment table has the structure SOLISTI1, which can contain only 255 characters, BUT you can use CL_ABAP_CHAR_UTILITIES=>CR_LF for the new line and CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB for the column separator.
You need to concatenate these separators in the attachment table.
Check out this example:
http://www.sapdevelopment.co.uk/reporting/email/attach_xls.htm
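The separator trick can be illustrated outside ABAP as well: CR_LF and HORIZONTAL_TAB are just the CR+LF and TAB character sequences, and the attachment body is built by joining cells with TAB and rows with CR_LF before chopping it into 255-character rows. A Python sketch with made-up data:

```python
# CR_LF and HORIZONTAL_TAB from CL_ABAP_CHAR_UTILITIES are just these
# two character sequences; the sample rows are made up.
TAB, CR_LF = "\t", "\r\n"

rows = [
    ["Material", "Plant", "Quantity"],
    ["M-01", "1000", "42"],
    ["M-02", "2000", "7"],
]

# Join cells with TAB and rows with CR_LF to build one flat body...
body = CR_LF.join(TAB.join(cells) for cells in rows) + CR_LF

# ...then chop the flat body into 255-character pieces for the
# SOLISTI1-style attachment table. The embedded separators, not the
# 255-character row boundaries, now define the spreadsheet layout.
chunks = [body[i:i + 255] for i in range(0, len(body), 255)]
print(len(chunks))
```

Reassembling the chunks reproduces the body exactly, which is why the receiving mail client can render columns wider than 255 characters.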
Regards,
Naimesh Patel -
Row chaining in table with more than 255 columns
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more then 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav
user10952094 wrote:
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more then 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav
Yesterday, I stated this on the forum "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
"Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
And this paraphrase from "Practical Oracle 8i":
V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
Related information may also be found here:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
"When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
http://download.oracle.com/docs/html/B14340_01/data.htm
"For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
Why not a test case?
Create a test table named T4 with 1000 columns.
With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
SPOOL C:\TESTME.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
/* ... COL4 through COL254 elided in the forum post ... */
COL255,
COL256,
COL257)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
/* ... one DBMS_RANDOM.STRING('A',3) expression per inserted column ... */
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=1000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What are the results of the above?
Before the insert:
NAME VALUE
table fetch continue 166
After the insert:
NAME VALUE
table fetch continue 166
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 332
Another test, this time with an average row length of about 12 bytes:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME2.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL256,
COL257,
COL999)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
Before the insert:
NAME VALUE
table fetch continue 332
After the insert:
NAME VALUE
table fetch continue 332
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695
The final test only inserts data into the first 4 columns:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME3.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
COL4)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What should the 'table fetch continued row' show?
Before the insert:
NAME VALUE
table fetch continue 33695
After the insert:
NAME VALUE
table fetch continue 33695
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695
My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
"Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case. -
Spool request with more than 255 columns
Hi,
Please let me know what format type has to be used to get spool output with more than 255 columns.
X_24_80_JP L ANY 00024 00080 ABAP list HR Japan: At least 24 rows by 80 columns
X_44_120 L ANY 00044 00120 ABAP/4 list: At least 44 rows by 120 columns
X_51_140_JP L ANY 00051 00140 ABAP list HR Japan: At least 51 rows by 140 columns
X_58_170 L ANY 00058 00170 ABAP/4 list: At least 58 rows by 170 columns
X_60_80_JP L ANY 00060 00080 ABAP list HR Japan: At least 60 rows by 80 columns
X_65_1024/4 L ANY 00065 01024 ABAP List: At Least 65 Lines 4*256=1024 Columns Four-Sided (Only for SAPlpd)
X_65_132 L ANY 00065 00132 ABAP list: At least 65 rows by 132 columns
X_65_132-2 L ANY 00065 00132 ABAP List: 2-column 65 characters 132 columns (only for SAPLPD from 4.15)
X_65_200 L ANY 00065 00200 ABAP list: at least 65 lines with 200 columns (not for all device types)
X_65_255 L ANY 00065 00255 ABAP/4 list: At least 65 rows with a maximum number of columns
X_65_256/2 L ANY 00065 00256 ABAP list: At least 65 lines 2*128=256 double columns (SAPLPD only)
X_65_512/2 L ANY 00065 00512 ABAP List: At least 65 Lines 2*256=512 Columns 2-sided (Only for SAPlpd)
X_65_80 L ANY 00065 00080 ABAP/4 list: At least 65 rows by 80 columns
X_65_80-2 L ANY 00065 00080 ABAP List: 2-column 65 characters 80 columns (only for SAPLPD from 4.15)
X_65_80-4 L ANY 00065 00080 ABAP List: 4-column 65 characters 80 columns (only for SAPLPD from 4.15)
X_90_120 L ANY 00090 00120 ABAP list: At least 90 rows by 120 columns
X_PAPER L ANY 00010 00010 ABAP/4 list: Default list formatting
X_PAPER_NT L ANY 00001 00001 ABAP/4 list: Obsolete (do not use)
X_POSTSCRIPT L ANY 00001 00001 Pre-prepared PostScript
X_SPOOLERR L ANY 00001 00001 ABAP list: Spooler problem report
X_TELEX L TELEX 00001 00001 Telex: 69 characters wide, only as many lines as supported by TTU
ZABC_SAP L ANY 00065 00550 LCM Report Page Type
I have created a custom Format Type with 65*550 (ZABC_SAP) , but still the output gets truncated in the spool.
In SP01, for the spool request: if it is displayed in the graphical layout, the output gets truncated, but when we view it in raw format I can see the entire output. However, it is not formatted at all.
Thanks,
Tanuj
Message was edited by:
Tanuj Kumar Bolisetty
Hello Tanuj,
You need to use a page format greater than 255 columns for sure. However, if that still does not solve the issue, then you may consider using note 186603.
PS: I guess you are on a release higher than 4.6C. For that release, the note has a text attachment for a report that allows displaying such spool requests.
Regards.
Ruchit. -
Compressed tables with more than 255 columns
hi,
Would anyone have a SQL query to find compressed tables with more than 255 columns?
Thank you
Jonu
SELECT table_name,
       Count(column_name)
FROM user_tab_columns utc
WHERE utc.table_name IN (SELECT table_name
                         FROM user_tables
                         WHERE compression = 'ENABLED')
GROUP BY table_name
HAVING Count(column_name) > 255 -
General Scenario- Adding columns into a table with more than 100 million rows
I was asked/given a scenario, what issues do you encounter when you try to add new columns to a table with more than 200 million rows? How do you overcome those?
Thanks in advance.
svk
For such a large table, it is better to add the new column at the end of the table to avoid any performance impact, as RSingh suggested.
Also avoid using any default on the newly created column, or SQL Server will have to fill up 200 million fields with this default value. If you need one, add an empty column and update it using small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column. -
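The batching advice above can be sketched end-to-end. This is not from the thread; it uses Python with SQLite as a stand-in for SQL Server, and the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO big (payload) VALUES (?)", [("row",)] * 1000)

# Step 1: add the column with no default -- a cheap metadata-only change.
conn.execute("ALTER TABLE big ADD COLUMN status TEXT")

# Step 2: backfill in small batches instead of one huge UPDATE, so locks
# are held briefly and the transaction log stays small.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE big SET status = 'N' WHERE id IN "
        "(SELECT id FROM big WHERE status IS NULL LIMIT ?)", (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3 (not shown): add the DEFAULT constraint once every row has a value.
(remaining,) = conn.execute(
    "SELECT count(*) FROM big WHERE status IS NULL").fetchone()
print(remaining)   # 0
```

On SQL Server the batch loop would typically use `UPDATE TOP (n)` inside a WHILE loop, but the shape of the approach is the same.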
Reports with more than 100 columns
I am using Apex 3.2. I am using a classic report with more than 100 columns. So to use custom headings in the report, I exported the page, made modifications in the exported SQL file and imported it back.
But when I pass the parameters from an input screen for the first time, it either shows all the data available in the database or only the header. If I do a refresh on the input screen and try again, it works as intended. I am updating at the below location of the page. Let me know if I have to update anything more in the SQL file.
declare
s varchar2(32767) := null;
begin
s := null;
wwv_flow_api.create_report_columns (
p_id=> 200100535534034253 + wwv_flow_api.g_id_offset,
p_region_id=> 2003453533453 + wwv_flow_api.g_id_offset,
p_flow_id=> wwv_flow.g_flow_id,
p_query_column_id=> 157,
p_form_element_id=> null,
p_column_alias=> 'XYZ',
p_column_display_sequence=> 157,
p_column_heading=> 'XYZ 123',
p_column_alignment=>'LEFT',
p_disable_sort_column=>'Y',
p_sum_column=> 'N',
p_hidden_column=> 'N',
p_display_as=>'WITHOUT_MODIFICATION',
p_pk_col_source=> s,
p_column_comment=>'');
end;
/
Hi,
It's just a general thought, and I realize you know your application and user community better than I do, but do you think your users are going to be very happy when they are presented with a report with more than a hundred columns? Have you considered presenting the data in some sort of rolled-up form from which the user can then drill down to the data they are particularly interested in? Also, I'm sure your LAN administrator would be happy not to see 100+ column by x-row reports being regularly shipped across the network.
Also, 100+ column reports suggest tables with 100+ columns, which are probably not designed in a very relationally compliant way. I find that good DB design usually results in applications that have to make fewer compromises, such as hacking export files in order to fool the API into making ridiculous, unsupported and unsupportable compromises.
Just a thought..................
Regards
Andre