Need suggestions for improving an SQL query.
Hi,
There is an SQL query which loads data into a single table from 80 other tables, each of which is loaded with data from a different site. The query that consolidates these 80 tables into the single table takes almost 40 minutes every day for 6-7 million records.
The procedure followed:
the target table is truncated first and then the constraints are disabled.
insert into single_table (column1, column2, column3)
( select * from table1
  union all
  select * from table2
  union all
  select * from table3
  union all
  -- ... (tables 4 through 79) ...
  select * from table80 );
Then the constraints are re-enabled.
The database is 10.2.0.3. Is there any other way this can be implemented or modified to load the data faster?
Thanks
A lot of IFs, but:
- if you have a licence for the partitioning option, and
- if the data has some convenient partitioning column - such as a 'site' column which could be used for list partitioning, and
- if you don't need the separate tables to exist after you have consolidated the data, and
- if you can create all other necessary constraints on the base tables,
you could:
- create the main table empty, but partitioned with 80 empty partitions
- exchange a partition with a table, 80 times in a row.
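A rough sketch of what that could look like (the table and column names are invented for illustration, and a site_id list-partitioning key is an assumption, not something the poster confirmed):

```sql
-- Create the consolidated table empty, list-partitioned by site.
CREATE TABLE single_table (
  column1  NUMBER,
  column2  VARCHAR2(30),
  column3  DATE,
  site_id  NUMBER          -- assumed partitioning column
)
PARTITION BY LIST (site_id) (
  PARTITION p_site_1 VALUES (1),
  PARTITION p_site_2 VALUES (2)
  -- ... one partition per site, 80 in total
);

-- Swap each site's table into its partition; this is a data-dictionary
-- operation, not a row-by-row copy, so it is almost instantaneous.
ALTER TABLE single_table
  EXCHANGE PARTITION p_site_1 WITH TABLE table1;
-- repeat for the remaining 79 tables
```

The exchange requires each source table's structure to match the partitioned table exactly, which is why the "create all necessary constraints on the base tables" condition above matters.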
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
Similar Messages
-
In the JDBC Sender Adapter, the server is Microsoft SQL Server. I need to pass the current date as the input while executing a stored procedure, which will return 10 output columns. Kindly suggest the SQL query string for executing the stored procedure with the current date as the input.
Hi Srinath,
The below blog might be useful
http://scn.sap.com/community/pi-and-soa-middleware/blog/2013/03/06/executing-stored-procedure-from-sender-adapter-in-sap-pi-71
PI/XI: Sender JDBC adapter for Oracle stored procedures in 5 days
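As a minimal T-SQL sketch of the call itself (the procedure name usp_GetDailyData and the parameter @RunDate are placeholders, not the poster's actual objects):

```sql
-- Pass the current date into a stored procedure returning 10 columns.
DECLARE @today DATETIME;
SET @today = GETDATE();
EXEC dbo.usp_GetDailyData @RunDate = @today;
```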
regards,
Harish -
Suggestion for Improving Number
Hello Oracle Java community,
I've recently encountered some difficulties using the abstract class java.lang.Number, and have a suggestion for improvement.
I'm writing a class that computes statistical information on a list of numbers - it would be nice to not couple this class to Integer, Double, BigDecimal, or any other wrapper by using generics. I saw that there is a nice superclass that all Number objects inherit from.
I came up with:
public class Statistics<T extends Number> {
    private List<T> data;
    // statistical data that I wish to find and store, such as median, mean, standard dev, etc.

    public synchronized void setData(List<T> data) {
        this.data = data;
        if (this.data != null && !this.data.isEmpty()) calculateStatistics();
    }

    private void calculateStatistics() {
        // Welcome to instanceof and casting hell...
    }
}
It would be nice to have richer functionality from the Number class, say to do mathematical operations with Numbers or compare them. After all, in the real world it is possible to do so.
Real numbers are much like BigDecimal. Why not take the idea of BigDecimal, and make that the parent of Integer, BigInteger, Double, Short, Byte, Float (I'm probably forgetting a few)? All of those are limited forms of real numbers. It would make comparison between Number datatypes easy, would probably remove all of the duplicated arithmetic code between the children of Number, and would also allow Numbers to be used in powerful generic ways. The parent/replacement of BigDecimal could even be named RealNumber, which stays true to its math domain.
As a side note, I'm solving this problem by taking an initial step to convert the List<whatever type of Number that the user enters> into a List<BigDecimal> by getting the toString() value of each element when cast as a Number.
private List<BigDecimal> convertData(List<T> data) {
    List<BigDecimal> converted = new ArrayList<BigDecimal>();
    for (T element : data) {
        // T extends Number, so toString() yields a value BigDecimal can parse
        converted.add(new BigDecimal(element.toString()));
    }
    return converted;
}
Criticism is always welcome.
Thanks for your time and thoughts.
-James Genac

How compareTo() came into existence is from the Comparable interface. As I understand it, Comparable came into existence because the Collections API has sorting functions, which need to be run with a matching Comparable object that knows how to determine which element is larger than the other (not limited to objects representing numbers; you might sort a list of Persons). Hence, compareTo() is not solely meant for the comparison of numbers. Its existence in BigDecimal is just one case.
Subclasses can override the equals() method, but that cannot be implemented in a clean manner and leads to very poor design. For example, you might want to compare an Integer and a Float. So the Integer class's equals() method needs some if-else structure to determine the other type and then compare. The same holds true for the Float class's equals() method. Ultimately, everything becomes a mess: all subclasses of RealNumber need to know about all other subclasses of RealNumber, and you will not be able to introduce new subtypes and expect the equals() method to work correctly.
To avoid this, you need to depend on a single representation form for all types of numbers. If that's the case, you might just live with something like BigDecimal and not use Byte, Float, Integer,... (which we kind of do in some cases - for example to represent monetary amounts). So we can live without Byte, Float, Integer,...
Then we need some utility classes that would contain some number type specific functions to work with primitives. So we will also have Byte, Float, Integer... unrelated to BigDecimal.
Clearly, the wrapper types are there not because of the need to represent real world number types, but because of the need to represent computer domain number types. Hence, they have been organized not according to relationships found in real world number types. Many of us find this way of modelling sufficient and have an understanding about the limitations. But if you need to model the real world number relationships for some special reason, you might write some new classes. Then again there will be real world aspects that you will not be able to model easily. So you will model some aspects and neglect the other. -
Any suggestions for improving my efficiency?
These are the two methods I've come up with to use what I have for making movies. One is for DVDs. The other is for making QuickTime MOV files for CDs. This is the process I have to use because we don't yet have our digital video camera that is firewire compatible with Final Cut.
For DVDs that will play in DVD players or media software on your computer:
1. I take the Video_TS folder and run it through DVD Imager (free, macupdate.com) which converts it into an IMG file.
2. I use the Apple Disk Utility (part of OS X) and burn the IMG file to a DVD.
Simple enough.
Making our recorded footage editable in Final Cut and then exporting as a QuickTime movie is a little more complicated. There may be a simpler way to do all this (like get a fire-wire FC-compatible camera I can capture footage from) but this is the process I finally got to work:
1. In the Video_TS folder are two VOB files. The larger one is the one that actually has your video on it. I use MPEG Streamclip (free, squared5.com) to remove the timebreaks (otherwise all you get is the poster frame) and convert it to a Quicktime MOV file. For settings, I just use Apple Video, 720x480 NTSC, and 30 fps. You need to buy the Apple Quicktime MPEG-2 Playback Component ($20, apple.com) for this free software to work.
2. Import the MOV file into Final Cut (I use Express which is $300 from apple.com) and do your editing and other yumminess. You'll need to render it first.
3. Export as an MOV file ... there's no .mov extension and the Info says it's a Final Cut Express Movie file, not a QT MOV, which makes me nervous, so I open it in QuickTime Pro ($30, apple.com) and export it using the Movie to Quicktime Movie setting.
4. Then I burn my Quicktime movies to a CD.
Any suggestions for improving my efficiency?

"For DVDs that will play in DVD players..."
If what you want is just to make copies of a DVD you burned yourself (eg using iDVD or your DVD camcorder) there is a simpler way: just create an image of the DVD on your desktop using Disk Utility, and then burn it using Disk Utility.
You need to go into the process of copying the VIDEO_TS folder only if you want to make changes to it. For example you might need myDVDEdit, a very powerful free editor of the DVD structure (to change the menu button behaviour, or so). Or maybe if the image is of a different size, from a small DVD to a large one.
Piero -
Need Suggestion for Archival of a Table Data
Hi guys,
I want to archive one of my large tables; the structure of the table is below.
Daily, around 40,000 rows are inserted into the table.
Need suggestions for the same. Will partitioning help, and on what basis?
CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
(
  LOAN_NUMBER     VARCHAR2(10 BYTE),
  LOAN_XML        CLOB,
  LOAN_UPDATE_DT  TIMESTAMP(6),
  JMS_TIMESTAMP   TIMESTAMP(6),
  INSERT_DT       TIMESTAMP(6)
)
TABLESPACE DATA
PCTUSED   0
PCTFREE   10
INITRANS  1
MAXTRANS  255
STORAGE (
  INITIAL     1M
  NEXT        1M
  MINEXTENTS  1
  MAXEXTENTS  2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
LOB (LOAN_XML) STORE AS
( TABLESPACE DATA
  ENABLE STORAGE IN ROW
  CHUNK 8192
  PCTVERSION 10
  NOCACHE
  STORAGE (
    INITIAL     1M
    NEXT        1M
    MINEXTENTS  1
    MAXEXTENTS  2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
  )
)
NOCACHE
NOPARALLEL;
do the needful.
regards,
Sandeep

There will not be any updates/deletes on the table.
I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
After loading the data from the original table into the archive table, I will truncate the original table.
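One possible shape for the archive table, range-partitioned by month on INSERT_DT (the partition names and boundary dates are illustrative only):

```sql
CREATE TABLE IM_JMS_MESSAGES_CLOB_IN_ARCH
(
  LOAN_NUMBER     VARCHAR2(10 BYTE),
  LOAN_XML        CLOB,
  LOAN_UPDATE_DT  TIMESTAMP(6),
  JMS_TIMESTAMP   TIMESTAMP(6),
  INSERT_DT       TIMESTAMP(6)
)
PARTITION BY RANGE (INSERT_DT)
(
  PARTITION p_2012_01 VALUES LESS THAN (TIMESTAMP '2012-02-01 00:00:00'),
  PARTITION p_2012_02 VALUES LESS THAN (TIMESTAMP '2012-03-01 00:00:00')
  -- ... one partition per month, pre-created ahead of time
);

-- Ageing out a month then becomes a partition-level operation rather
-- than row-by-row deletes:
ALTER TABLE IM_JMS_MESSAGES_CLOB_IN_ARCH DROP PARTITION p_2012_01;
```

With monthly partitions, "restoring last month" also becomes a question of keeping (or exchanging out) one partition rather than reloading rows.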
If my original table is partitioned, then what about restoring the data? How will I restore the data of the last month? -
Need suggestion for designing a BEx report
Hi,
I need suggestions for designing a BEx report.
I've a DSO with the below structure:
1. Functional Location - Key
2. Maintenance Plan - Key
3. Maintenance Item - Key
4. Call # - Key
5. Cycle - Data Field
6. Planned Date - Data Field
7. Completion Date - Data Field
This DSO contains data like:
Functional Location | Plan | Item | Call# | Cycle | Planned Dt  | Completion Dt
11177               | 134  | 20   | 1     | T1    | 02-Jan-2011 | 10-Jan-2011
11177               | 134  | 20   | 2     | T2    | 15-Feb-2011 |
11177               | 134  | 20   | 3     | T1    | 15-Mar-2011 |
11177               | 134  | 20   | 4     | M1    | 30-Mar-2011 |
25000               | 170  | 145  | 1     | T1    | 19-Jan-2011 | 19-Jan-2011
25000               | 134  | 145  | 2     | T2    | 20-Feb-2011 | 25-Feb-2011
25000               | 134  | 145  | 3     | T1    | 14-Mar-2011 |
Now I've to create a report which will be executed at the end of every month and should display the list of Functional Locations whose Cycles were planned in that particular month, along with the last completed Cycle/Date.
Thus, based upon the above data, if I execute the report at the end of (say) March then the report must display:
Functional Location | Curr. Cycle | Planned Date | Prev. Completed Cycle | Prev. Completed Date
11177               | T1          | 15-Mar-2011  | T1                    | 10-Jan-2011
11177               | M1          | 30-Mar-2011  | T1                    | 10-Jan-2011
25000               | T1          | 14-Mar-2011  | T2                    | 25-Feb-2011
Any idea how can I display Previous Completed Cycle and Completion Date (i.e. the last two columns)?
Regards,
Vikrant.

Hi Vikrant,
You can add a Cube at the reporting layer which gets data from the DSO and which has these 2 extra characteristics (completion date and previous cycle) along with the other characteristics and key figures from the DSO.
You can populate these based on your logic in the field routine.
Hope it helps.
Regards
Dev -
Hi All,
I am looking for an SQL query to request the HDS database to find out which Directory Number / instrument was associated with a specific CTI OS agent login ID.
Has anyone done such a query before ?
Thanks and Regards
Nick

Hi,
this should work in 8.0 and 8.5:
SELECT
ag.PeripheralNumber AS [LoginID],
al.Extension,
al.LogoutDateTime
FROM [instance]_hds.dbo.Agent_Logout al
JOIN [instance]_awdb.dbo.Agent ag ON al.SkillTargetID = ag.SkillTargetID
Of course, replace [instance] with the ICM instance.
The query returns a table with three columns, first is the login ID aka PeripheralNumber, Extension is... well, the agent's extension, and LogoutDateTime is the timestamp when the agent logged out.
G. -
Hi Team,
I am looking for an SQL query to check the data (ECC + CallVariable) received following a RUN SCRIPT RESULT when requesting an external VRU with a Translation Route to VRU with a "Run External Script".
I believe the data are split between the Termination_Call_Detail and Termination_Call_Variable tables.
If you already have such an SQL query I would very much appreciate to have it.
Thank you and Regards
Nick

Omar,
with all due respect, shortening a one-day interval might not be an option for a historical report ;-)
I would recommend taking a look at the following SQL query:
DECLARE @dateFrom DATETIME, @dateTo DATETIME
SET @dateFrom = '2014-01-24 00:00:00'
SET @dateTo = '2014-01-25 00:00:00'
SELECT
tcv.DateTime,
tcd.RecoveryKey,
tcd.RouterCallKeyDay,
tcd.RouterCallKey,
ecv.EnterpriseName AS [ECVEnterpriseName],
tcv.ArrayIndex,
tcv.ECCValue
FROM Termination_Call_Variable tcv
JOIN
(SELECT RouterCallKeyDay,RouterCallKey,RecoveryKey FROM Termination_Call_Detail WHERE DateTime > @dateFrom AND DateTime < @dateTo) tcd
ON tcv.TCDRecoveryKey = tcd.RecoveryKey
LEFT OUTER JOIN Expanded_Call_Variable ecv ON tcv.ExpandedCallVariableID = ecv.ExpandedCallVariableID
WHERE tcv.DateTime > @dateFrom AND tcv.DateTime < @dateTo
With variables, you can parametrize your code (for instance, you could write SET @dateFrom = ? and let the calling application fill in the datetime value in for you).
Plus joining two large tables with all rows like you did (TCD-TCV) is never a good option.
Another aspect to consider: all ECC's are actually arrays (always), so it's not good to leave out the index value (tcv.ArrayIndex).
G. -
Any suggestions for improving Oracle Tools GUI performance?
Does anyone have any suggestions for improving the GUI performance of Oracle's Java tools? Response to events is very sloooow, i.e. click on a menu in Oracle Directory Manager and wait three seconds before the menu items appear.
System Environment:
Dell Inspiron 8100
Windows XP Pro
256MB Ram
1 GHz
Oracle:
Oracle9i Enterprise Edition 9.0.1.1.1
Other:
No non Oracle Java components installed (JDKs, JREs etc.)
Thanks

If the database and the tools are on the one box, more memory is probably required. I had an NT box with 500MHz/256MB and Oracle 9i, and the Java tools were unusable. I upgraded to 768MB of RAM and the Java tools were much quicker. I use the Java tools on my laptop (256MB, 800MHz) and they work fine for remote databases (i.e. no RDBMS on the laptop).
-
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / FORALL update, used the ordered hint, and created a temp table and updated the target table from it. None of the methods I used improved performance. The data count updated in the target table is 2 million records, and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where (x, y, z) in
      ( select a.x, a.y, a.z
          from table1 a
         where (a.x, a.y, a.z) not in
               ( select b.x, b.y, b.z
                   from table2 b
                  where 'O' = b.defg
                    and mnop = 'P'
                    and hijkl = 'UVW' ) );

987981 wrote:
I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance.

And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)

The data count which is updated in the target table is 2 million records and the target table has 15 million records.

Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. Simple example. Let's say that the 2 million row find is 1ms/row and the 2 million row write is also 1ms/row. This means a 66 minute workload. Due to the number of rows, an increase in time/row either way, will potentially have 2 million fold impact.
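As a point of reference, the MERGE rewrite the poster mentions having tried would look roughly like this (same table and column names as in the question; whether it is any faster depends entirely on where the time actually goes):

```sql
MERGE INTO targettable tt
USING ( SELECT a.x, a.y, a.z
          FROM table1 a
         WHERE (a.x, a.y, a.z) NOT IN
               ( SELECT b.x, b.y, b.z
                   FROM table2 b
                  WHERE 'O' = b.defg
                    AND mnop = 'P'
                    AND hijkl = 'UVW' )
      ) src
   ON (tt.x = src.x AND tt.y = src.y AND tt.z = src.z)
 WHEN MATCHED THEN
      UPDATE SET tt.mnop = 'G';
```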
So where is the performance problem? Time spend finding the 2 million rows (where other tables need to be read, indexes used, etc)? Time spend writing the 2 million rows (where triggers and indexes need to be fired and maintained)? Both? -
Need suggestion for Column update in query results
While generating reports using Oracle 10g SQL queries, we need to update a few of the columns with business calculations. We are processing a large amount of data. Kindly suggest the best method to achieve this.
I don't know about Oracle 10g SQL Query specifically, but I wouldn't mix reporting with data calculations that are stored persistently in the database. I would separate them; e.g. you could create a database job to execute your updates at a specific time each day.
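For example, a nightly job could be scheduled with DBMS_SCHEDULER (the job name and the procedure apply_business_calcs are placeholders for whatever holds your calculation logic):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DAILY_BUSINESS_CALCS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN apply_business_calcs; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',  -- run at 02:00 each day
    enabled         => TRUE);
END;
/
```

This keeps the reporting queries read-only while the calculated columns are refreshed once a day.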
hope this helps -
Newbie - need help with a SQL query for a bar chart
Hi,
I'm a new user on APEX with no real SQL knowledge and I'm trying to build a dashboard with charts into an existing APEX application. Based on another application, I have come up with the following SQL code:
select null link
, CATEGORY label
, count (decode(PROJECT_STATUS,'1',PROJECT_ID))"Active"
, count (decode(PROJECT_STATUS,'2',PROJECT_ID))"Complete"
, count (decode(PROJECT_STATUS,'3',PROJECT_ID))"On Hold"
, count (decode(PROJECT_STATUS,'4',PROJECT_ID))"Pipeline"
, count (decode(PROJECT_STATUS,'5',PROJECT_ID))"Pending Review"
from GRAPO_PROHEADTRK
where (PROJECT_STATUS='1' or PROJECT_STATUS='2' or PROJECT_STATUS='3' or PROJECT_STATUS='4' or PROJECT_STATUS='5' or PROJECT_STATUS='6')
group by CATEGORY
Order by COUNT(PROJECT_ID) DESC
The code from the other app was:
select null link
, FUNCTIONAL_AREA label
, count (decode(PROJECT_STATUS,'Active',PROJECT_ID))"Active"
, count (decode(PROJECT_STATUS,'Complete',PROJECT_ID))"Complete"
, count (decode(PROJECT_STATUS,'On Hold',PROJECT_ID)) "On Hold"
, count (decode(PROJECT_STATUS,'Recurring',PROJECT_ID))"Recurring"
, count (decode(PROJECT_STATUS,'Pipeline',PROJECT_ID))"Pipeline"
, count (decode(PROJECT_STATUS,'Not Approved',PROJECT_ID))"Not Approved"
from PM_V2
where LOB='S2S' and (FUNCTIONAL_AREA='Accounts Payable' or FUNCTIONAL_AREA='Expense' or FUNCTIONAL_AREA='Procurement' or FUNCTIONAL_AREA='Fixed Assets')
group by FUNCTIONAL_AREA
Order by COUNT(PROJECT_ID) DESC
I'm getting a "Failed to parse SQL query!" error when I try to run validation.
Is this enough info for some assistance? Any help would really be appreciated.
Thanks,
Rachel

Hello,
This is more of an SQL question, rather than specifically APEX-related. It's notable that you say: "I'm a new user on APEX with no real SQL knowledge." Which is fine (we all have to start somewhere, after all), but it might be worth de-coupling the problem from APEX in the first instance. I'd also strongly recommend either taking a course, reading a book (e.g. http://books.google.co.uk/books?id=r5vbGgz7TFsC&printsec=frontcover&dq=Mastering+Oracle+SQL&hl=en#v=onepage&q=Mastering%20Oracle%20SQL&f=false) or looking for a basic SQL tutorial - it will save you a whole lot of heartache, I promise you. Search the Oracle forums for the terms "Basic SQL Tutorial" and you should come up with a bunch of results.
Given that you've copied your query template from another, I would suggest ensuring that the actual query works first of all. Try running it in either:
* SQL Editor
* SQL*Plus
* an IDE like SQL Developer (available free from the OTN: http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html ) or TOAD or similar.
You may find there are syntax errors associated with the query - it's difficult to tell without looking at your data model.
select null link
, CATEGORY label
, count (decode(PROJECT_STATUS,'1',PROJECT_ID))"Active"
, count (decode(PROJECT_STATUS,'2',PROJECT_ID))"Complete"
, count (decode(PROJECT_STATUS,'3',PROJECT_ID))"On Hold"
, count (decode(PROJECT_STATUS,'4',PROJECT_ID))"Pipeline"
, count (decode(PROJECT_STATUS,'5',PROJECT_ID))"Pending Review"
from GRAPO_PROHEADTRK
where (PROJECT_STATUS='1' or PROJECT_STATUS='2' or PROJECT_STATUS='3' or PROJECT_STATUS='4' or PROJECT_STATUS='5' or PROJECT_STATUS='6')
group by CATEGORY

Note that your "order by" clause references a field called "PROJECT_ID", which exists in the old query, but you've changed other similar references to "PROJECT_STATUS" - is it possible you've just missed this one? The perils of copy-paste coding, I'm afraid... -
Need suggestions for imporving data load performance via SQL Loader
Hi,
Our requirement is to load 512 files (1 GB each) into an Oracle database.
We are using SQL*Loader to load the files into the DB (a partitioned table) and have tried almost all the possible options that come with SQL*Loader (direct path load, parallel=true, multithreading=true, unrecoverable).
As the table grows bigger, each file's load time is increasing (it started at 5 minutes per file and has reached 2 hours per 3 files now, and it keeps increasing with every batch). Note: we are loading 3 files concurrently into the target table using the parallel=true option of SQL*Loader.
Questions 1:
My problem is that somehow multithreading is not working for us (we have multi CPU server and have enabled multithreading=true). Could it be something to do with DB setting which might be hindering the data load to be done in multiple threads?
Question 2:
Would gathering stats on the target table and it's partitions help improve load performance ? I'm not sure if stats improve DML's, they would definitely improve sql queries. Any thoughts?
Question 3:
What would be the best strategy to gather stats on this table (which would end up having 512 GB data) ?
Question 4:
Do you think insertions in a partitioned table (with growing sizes) would have poor performance as compared to a non-partitioned table ?
Any other suggestions to improve performace are most welcome !!
Thanks,
Sachin
Edited by: Sachin Tiwari on Mar 13, 2013 6:29 AM

2 hours to load just 3 GB of data seems unreasonable regardless of the SQL Loader settings. It seems likely to me that the problem is not with SQL Loader but somewhere else.
Have you generated a Statspack/ AWR/ ASH report to see where all that time is being spent? Are there triggers on the table? Are there bitmap indexes?
Is your table partitioned in a way that is designed to improve the efficiency of loads, so that all the data from one file goes into one partition? Or is data from each file getting inserted into many different partitions?
Justin -
How to pass a variable for a SQL query in OLEDB source?
Hi All,
I am new to SSIS and have been working with it for the past few days. Can anyone please help me with a scenario where I need to pass a variable to the SQL statement in an OLEDB source connection? Please find the details below.
eg:
1) I have a SQL table with the columns SerialNumber, Name, IsValid, FileName with multiple rows.
2) I have the file Name in a variable called Variable1.
3) I want to read the data from my SQL table filtering based on the FileName (Variable1) within a data flow task and pull that data to the destination table.
Question: In the data flow task, I added source and destination DB connections with a script component in between to perform my validations. When trying to retrieve the data from the source using the variable (i.e. a SQL query with a variable), I am not able to add the query, as the SQL statement box is disabled. How do I filter the data based on the variable in the source DB?
Any help/suggestions would be of great help.
Thanks,
Sri

Just to add to Vaibhav's comment:
SQL Command: a SQL query, either with a SQL variable, any condition, or a simple SQL statement, like:
Select * from dimcustomer
SQL Command from Variable:
Sometimes we design our dynamic query in a variable and directly use that variable name in the OLEDB source.
If your SQL query needs a condition based on an SSIS variable, you can find an example here:
http://www.toadworld.com/platforms/sql-server/b/weblog/archive/2013/01/17/ssis-replace-dynamic-sql-with-variables.aspx
http://www.select-sql.com/mssql/how-to-use-a-variable-inside-sql-in-ssis-data-flow-tasks.html
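For the original scenario, once the OLE DB source's data access mode is set to "SQL command", the query itself is small: a ? parameter marker is mapped to User::Variable1 on the source's Parameters page (the table name dbo.FileData is an assumption, since the question gives only the column names):

```sql
SELECT SerialNumber, Name, IsValid, FileName
FROM dbo.FileData        -- assumed table name
WHERE FileName = ?       -- mapped to the SSIS variable User::Variable1
```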
Thanks
Please Mark This As Answer or vote for Helpful Post if this helps you to solve your question/problem. http://techequation.com -
Hi,
I need help in writing an SQL query. I am actually confused about how to write it. Below is the scenario.
CREATE TABLE demand_tmp
( item_id        NUMBER,
  org_id         NUMBER,
  order_line_id  NUMBER,
  quantity       NUMBER,
  order_type     NUMBER
);

CREATE TABLE order_tmp
( item_id        NUMBER,
  org_id         NUMBER,
  order_line_id  NUMBER,
  open_flag      VARCHAR2(10)
);
INSERT INTO demand_tmp
SELECT 12438,82,821,100,30 FROM dual;
INSERT INTO demand_tmp
SELECT 12438,82,849,350,30 FROM dual;
INSERT INTO demand_tmp
SELECT 12438,82,NULL,150,29 FROM dual;
INSERT INTO demand_tmp
SELECT 12438,82,0,50,-1 FROM dual;
INSERT INTO order_tmp
SELECT 12438,82,821,'Y' FROM dual;
INSERT INTO order_tmp
SELECT 12438,82,849,'N' FROM dual;
Demand_tmp:

Item_id  org_id  order_line_id  quantity  order_type
12438    82      821            100       30
12438    82      849            350       30
12438    82      NULL           150       29
12438    82      0              50        -1

Order_tmp:

Item_id  org_id  order_line_id  open_flag
12438    82      821            Y
12438    82      849            N

I need to fetch the records from the demand_tmp table whose order_line_id is present in order_tmp with open_flag = 'Y', or whose order_type in demand_tmp is 29.
The below query will give the records whose order_line_id is present in order_tmp. But if I also need the records with order_type = 29, the below query won't return them, as their order_line_id is NULL. If I use an outer join, I will get other records as well (in this example, the order_type -1 record). Please help me write a query for this. The expected output is below.
Query :
Select item_id,org_id,order_line_id,quantity,order_type,open_flag
from demand_tmp dt , order_tmp ot
where dt.order_line_id = ot.order_line_id
AND dt.item_id=ot.item_id
AND dt.org_id = ot.org_id
AND ot.open_flag = 'Y';
Expected Output:

item_id  org_id  order_line_id  quantity  order_type  open_flag
12438    82      821            100       30          Y
12438    82      NULL           150       29          NULL

Thanks in advance,
Rakesh
Edited by: Venkat Rakesh on Oct 7, 2012 6:32 PM
Edited by: Venkat Rakesh on Oct 7, 2012 8:39 PM

Hi Rakesh,
the query is not working as you would like (but IS working as expected) since you're trying to compare null to another value.
A comparison with null never evaluates to TRUE, even when you compare null to null. This is because null means undefined.
select 1 from dual where null = null results in no data found.
I would suggest using a non natural key to join the tables.
For example include a column ID in the master table which is filled with a sequence and include that field as a foreign key in the detail table.
This way you can easily join master and detail on ID = ID, and you don't have to worry about null values in this column since it's always filled with data.
Regards,
Bas
btw, using the INNER JOIN and OUTER JOIN syntax in your SQL makes it more readable, since you're separating join conditions from the where clause; just a tip ;)
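For what it's worth, the expected output in the question can also be produced directly with an outer join plus an OR filter, keeping open_flag from the outer join so it comes back NULL for the order_type = 29 row (a sketch against the tables as posted):

```sql
SELECT dt.item_id, dt.org_id, dt.order_line_id,
       dt.quantity, dt.order_type, ot.open_flag
  FROM demand_tmp dt
  LEFT OUTER JOIN order_tmp ot
    ON  dt.order_line_id = ot.order_line_id
   AND  dt.item_id       = ot.item_id
   AND  dt.org_id        = ot.org_id
 WHERE ot.open_flag = 'Y'
    OR dt.order_type = 29;
```

Against the sample data this keeps the (821, 30, 'Y') row and the (NULL, 29) row, while the open_flag = 'N' row and the order_type -1 row are filtered out.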