Select performance from all_tables vs user_tables (same for columns)
Hi, I need to run two queries, one is to select either all tables of a specific owner, the other query is to select all columns of that owner.
Sometimes the owner is the user I connected as, and sometimes it's different.
I was wondering what the performance implication is of running:
select ... from all_tables where owner = 'my owner';
vs
select ... from user_tables;
I realize I can only do this if 'my owner' is the connected user, but if there is a performance difference, I'd rather put the 'if/else' code in there to get faster results.
Same question for all_tab_columns vs. user_tab_columns.
I ran a test locally here (development shop) and don't see a performance difference, but I'm not sure how it would work out in a production database.
If version of Oracle matters, I'm interested in Oracle 10g and above. (nothing below 10g)
Edited by: user5947840 on Aug 25, 2011 6:30 AM
Not really of any consequence as far as you are concerned. ALL_ and USER_ are the same catalog queries underneath; the difference is that USER_ filters on the current user, while ALL_ filters on the privileges of the current user.
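If you still want the if/else dispatch the original poster describes, here is a minimal sketch in plain Python string-building (the helper name pick_table_query is made up for illustration; binding the owner instead of inlining a literal is assumed good practice, not something from the thread):

```python
# Choose between USER_TABLES and ALL_TABLES depending on whether the
# target owner is the connected user. Purely illustrative string logic.
def pick_table_query(owner, connected_user):
    """Return (sql, bind_values) for listing tables of `owner`."""
    if owner.upper() == connected_user.upper():
        # USER_TABLES is already filtered to the current schema.
        return ("select table_name from user_tables", ())
    # ALL_TABLES needs an explicit owner filter (use a bind, not a literal).
    return ("select table_name from all_tables where owner = :owner",
            (owner.upper(),))

sql, binds = pick_table_query("scott", "SCOTT")
```

The same dispatch would apply to user_tab_columns vs. all_tab_columns.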
Similar Messages
-
Using selected output from a RFC as input for another RFC
Hi,
I'm new at this so I may be doing things completely wrong.
I have two models based on adaptive RFC.
The first populates a list of Org Units. (Orgeh_Out)
The user then selects multiple Org Units from this list.
I want to use the selected Org Units as Input for the second RFC which will display the personnel numbers in the selected Org Units. (Orgeh_Tab_In)
Model 1 works fine but I am having difficulty in populating the Org Unit input table for the second model.
public void RFCPernrFill( )
{
  //@@begin RFCPernrFill()
  Z_Wd_Pernr_Input pernrInput = new Z_Wd_Pernr_Input();
  wdContext.nodePernrList().bind(pernrInput);
  int orgCount = wdContext.nodeOrgeh_Out().size();
  for (int i = 0; i < orgCount; i++) {
    if (wdContext.nodeOrgeh_Out().isMultiSelected(i)) {
      IOrgeh_OutElement thisOrgUnit = wdContext.nodeOrgeh_Out().getOrgeh_OutElementAt(i);
      Zwd_Orgeh tmpOrgTab = new Zwd_Orgeh();
      String st = String.valueOf(thisOrgUnit);
      tmpOrgTab.setOrgeh(st); // <-- Causes error but will only accept a String
      pernrInput.addOrgeh_Tab_In(tmpOrgTab);
    }
  }
  //@@end
}
I don't understand why tmpOrgTab.setOrgeh will only accept a String and not the 'thisOrgUnit' variable.
i.e. why couldn't I say tmpOrgTab.setOrgeh(thisOrgUnit) ?
The command tmpOrgTab.setOrgeh(st); causes an error; however, if I hard-code an Org Unit, tmpOrgTab.setOrgeh("5000011");, it works. The error I get is:
Type conversion error, field ORGEH, complex type class com.sap.com.testing.pernrlist.model1.Zwd_Orgeh
Hi,
Can you give me the structure of the proxy classes generated. I will give you the working code :).
The lines of code
Zwd_Orgeh tmpOrgTab = new Zwd_Orgeh();
String st = String.valueOf(thisOrgUnit);
tmpOrgTab.setOrgeh(st); //<-- Causes error but will only accept a String
pernrInput.addOrgeh_Tab_In(tmpOrgTab);
There is a structure in your RFC called ZWD_ORGEH.
Instantiate this like you have done.
Zwd_Orgeh tmpOrgTab = new Zwd_Orgeh();
Now the following line of code
tmpOrgTab.setOrgeh(st)
Here you say it is only accepting a string. But what I suspect is that ORGEH maps to another internal structure, and you need to instantiate that as well.
Anyway, if you just send me the generated proxy classes I will be able to help you with your code.
Also give me the structure of the RFC.
Meanwhile check this link
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/webdynpro/effective web dynpro - adaptive rfc models
The above link will also give you some input on your problem.
regards
Ravi -
Exporting selected contacts from Outlook 2011 for Mac
I've wanted to easily export a set of selected contacts from Outlook in Microsoft Office for Mac 2011. I've been through many threads about synching to Address Book and then exporting, but I've found a host of troubles, including duplicate copies of contacts being created.
So, I finally broke down and wrote an AppleScript script to export all of the currently selected contacts from Outlook to a file in either vcf (vcard) or csv (comma separated value) format. The best use of this script is to:
-- Install this as a script in Microsoft Outlook by saving the script below to Documents>Microsoft User Data>Outlook Script Menu Items
-- Change to your Contacts in Outlook. Use the Outlook search bar to find the contacts you want to export. You might search by name, category, company, or anything else that identifies the contacts you want to export. Or, you might just leave the view showing all contacts.
-- Select the contacts you want to export
-- Launch the script
The script will have you select between vcard and csv and select a destination file. This hasn't been optimized for speed, so if you're exporting 100's or 1,000's of contacts, be patient. And there isn't a progress bar at present, so you have to wait. It will display an alert when it's complete.
Sorry not to have a download location for you. You'll just have to copy the script text :-). Keep in mind there's been some but limited testing. Read the comments for details. And enjoy.
-- jsc
-- Export Outlook Contacts
-- (c) 2012 J. Scott Carr. The script is made available for free use, with no
-- warranty, under the Creative Commons license agreement.
-- This script has only been tested on Mac OS X 10.6.8 with Microsoft Outlook for
-- Mac 2011 version 14.1.4.
property byCategory : "By category"
property byPattern : "Names matching pattern"
property vcardFormat : "VCard"
property csvFormat : "CSV"
-- main
set contactsToExport to {}
-- Get the contact selection
set contactsToExport to get_contacts_to_export()
if (count of contactsToExport) is 0 then
display alert "Please select contacts to export and rerun script" as warning
return
end if
-- Shall we export to vcard or CSV?
set theFormat to vcard_or_csv()
if theFormat is "" then
display alert "Error: Must select VCard or CSV format" as warning
return
end if
-- Get and open the output file
set oFile to open_output_file(theFormat)
if (oFile is equal to -128) then
display alert "Canceled"
return
else if (oFile < 0) then
display alert "File open failed (" & oFile & ")" as warning
return
end if
-- Export the contacts
display dialog "About to export " & (count of contactsToExport) & " contacts in " & theFormat & " format. Proceed?"
if button returned of result is not "OK" then
try
close access oFile
end try
return
end if
if theFormat is vcardFormat then
export_to_vcard(contactsToExport, oFile)
else if theFormat is csvFormat then
export_to_csv(contactsToExport, oFile)
else
display alert "Invalid format" as warning
end if
close access oFile
display alert "Complete"
return
-- get_contacts_to_export()
-- We're going to export the Contacts currently selected in Outlook.
-- Check that the current selection is Contacts and not some other Outlook
-- object. Snag the selected Contacts and return them as a list.
-- A side note. When I started this, I built options to enter a matching
-- name string or select a category. And then it hit me that those features
-- are much more robust in Outlook, and it would be easy to just use the
-- current selection.
-- There is some strange behavior that Outlook needs to have recently been
-- the front, active window.
on get_contacts_to_export()
set selectedContacts to {}
tell application "Microsoft Outlook"
set theSelection to selection
if class of theSelection is list then
if class of the first item of theSelection is contact then
copy theSelection to selectedContacts
end if
else
if class of theSelection is contact then
copy theSelection to selectedContacts
end if
end if
return selectedContacts
end tell
end get_contacts_to_export
-- vcard_or_csv()
-- Get the format to use when exporting contacts
on vcard_or_csv()
choose from list {vcardFormat, csvFormat} with prompt "Select export file format:"
if result is false then
return ""
else
return first item of result
end if
end vcard_or_csv
-- open_output_file()
-- Open the destination file for the export, returning the file descriptor or the error number
-- if the operation fails
on open_output_file(exportType)
-- Get the filename, letting "choose file name" deal with existing files.
set theDate to current date
set theTime to time of theDate
if exportType is csvFormat then
set fileName to "contacts.csv"
else
set fileName to "contacts.vcf"
end if
try
set outputFile to choose file name with prompt "Select export destination file" default name fileName
on error errText number errNum
return errNum
end try
-- Open the file
try
-- Open the file as writable and overwrite contents
set oFile to open for access outputFile with write permission
set eof oFile to 0
on error errText number errNum
display alert "Error opening file: " & errNum & return & errText as warning
try
close access oFile
end try
return errNum
end try
return oFile
end open_output_file
-- export_to_vcard()
-- Export each of theContacts to the open file outFile as a set of vcards. Note that the
-- vcard data is from the "vcard data" property of the theContacts. This routine
-- doesn't attempt to reformat an Outlook vcard, nor limit the fields included
-- in the vcard.
on export_to_vcard(theContacts, outFile)
set vcards to {}
tell application "Microsoft Outlook"
repeat with aContact in theContacts
copy vcard data of aContact to the end of vcards
end repeat
end tell
repeat with aCard in vcards
write (aCard & linefeed) to outFile
end repeat
end export_to_vcard
-- export_to_csv()
-- Export each of theContacts to the open file outFile in csv format
on export_to_csv(theContacts, outFile)
set csvFields to {}
-- Get the fields of the contact to export
set csvFields to init_csv()
-- Write the header row
set nFields to count csvFields
write first item of csvFields to outFile
repeat with i from 2 to nFields
write "," & item i of csvFields to outFile
end repeat
write linefeed to outFile
-- Export the fields of the contacts in CSV format, one per line
repeat with aContact in theContacts
write build_csv_line(csvFields, aContact) & linefeed to outFile
end repeat
end export_to_csv
-- init_csv(): defines the fields to export when csv format is selected
-- Each of the fields in the list must match a name used in the routine build_csv_line().
-- The idea is to later create a a pick list so the user can select which contact properties
-- to export.
on init_csv()
set csvFields to {"first name", "last name", "middle name", "title", "nickname", "suffix", "phone", "home phone number", "other home phone number", "home fax number", "business phone number", "other business phone number", "business fax number", "pager number", "mobile number", "home email", "work email", "other email", "company", "job title", "department", "assistant phone number", "home street address", "home city", "home state", "home country", "home zip", "business street address", "business city", "business state", "business country", "business zip", "home web page", "business web page", "note"}
end init_csv
-- build_csv_line(): format one line for the csv file
-- Parameter csvFields determines which fields to include in the export.
-- Unfortunately I've not figured out how to use perl-style generation of
-- indirect references. If I could, this would have been much more elegant
-- by simply using the field name to refer to a Contact property.
-- Note that email addresses are a special case as they're a list of objects in
-- Outlook. So these are handled specially in the export function and can only
-- be selected by the column names "home email", "work email", and "other email".
-- Outlook allows a contact to have more than one of each type of email address
-- but not all contact managers are the same. This script takes the first of
-- each type. So if a contact has more than one "home" email address, you will
-- only be able to export the first to a csv file. Suggest you clean up your
-- addresses in Outlook to adapt. The alternative is to support multiple
-- columns in the csv like "other email 1" and "other email 2", but that's not
-- supported in this version.
-- Another note. In this version, any embedded "return" or "linefeed" characters
-- found in a property of a contact are converted to a space. That means that
-- notes, in particular, will be reformatted. That said, this gets around a problem
-- with embedded carriage returns in address fields that throw off importing
-- the csv file.
-- Also note that at this time IM addresses aren't supported, but it's an easy add
-- following the same logic as email addresses.
on build_csv_line(csvFields, theContact)
set aField to ""
set csvLine to ""
set homeEmail to ""
set workEmail to ""
set otherEmail to ""
tell application "Microsoft Outlook"
set props to get properties of theContact
-- Extract email addresses from address list of contact
set emailAddresses to email addresses of props
repeat with anAddress in emailAddresses
if type of anAddress is home then
set homeEmail to address of anAddress
else if type of anAddress is work then
set workEmail to address of anAddress
else if type of anAddress is other then
set otherEmail to address of anAddress
end if
end repeat
-- Export each desired fields of the contact
repeat with aFieldItem in csvFields
set aField to aFieldItem as text
set aValue to ""
if aField is "first name" then
set aValue to get first name of props
else if aField is "last name" then
set aValue to last name of props
else if aField is "middle name" then
set aValue to middle name of props
else if aField is "display name" then
set aValue to display name of props
else if aField is "title" then
set aValue to title of props
else if aField is "nickname" then
set aValue to nickname of props
else if aField is "suffix" then
set aValue to suffix of props
else if aField is "phone" then
set aValue to phone of props
else if aField is "home phone number" then
set aValue to home phone number of props
else if aField is "other home phone number" then
set aValue to other home phone number of props
else if aField is "home fax number" then
set aValue to home fax number of props
else if aField is "business phone number" then
set aValue to business phone number of props
else if aField is "other business phone number" then
set aValue to other business phone number of props
else if aField is "business fax number" then
set aValue to business fax number of props
else if aField is "pager number" then
set aValue to pager number of props
else if aField is "mobile number" then
set aValue to mobile number of props
else if aField is "home email" then
set aValue to homeEmail
else if aField is "work email" then
set aValue to workEmail
else if aField is "other email" then
set aValue to otherEmail
else if aField is "office" then
set aValue to office of props
else if aField is "company" then
set aValue to company of props
else if aField is "job title" then
set aValue to job title of props
else if aField is "department" then
set aValue to department of props
else if aField is "assistant phone number" then
set aValue to assistant phone number of props
else if aField is "age" then
set aValue to age of props
else if aField is "anniversary" then
set aValue to anniversary of props
else if aField is "astrology sign" then
set aValue to astrology sign of props
else if aField is "birthday" then
set aValue to birthday of props
else if aField is "blood type" then
set aValue to blood type of props
else if aField is "description" then
set aValue to description of props
else if aField is "home street address" then
set aValue to home street address of props
else if aField is "home city" then
set aValue to home city of props
else if aField is "home state" then
set aValue to home state of props
else if aField is "home country" then
set aValue to home country of props
else if aField is "home zip" then
set aValue to home zip of props
else if aField is "home web page" then
set aValue to home web page of props
else if aField is "business web page" then
set aValue to business web page of props
else if aField is "spouse" then
set aValue to spouse of props
else if aField is "interests" then
set aValue to interests of props
else if aField is "custom field one" then
set aValue to custom field one of props
else if aField is "custom field two" then
set aValue to custom field two of props
else if aField is "custom field three" then
set aValue to custom field three of props
else if aField is "custom field four" then
set aValue to custom field four of props
else if aField is "custom field five" then
set aValue to custom field five of props
else if aField is "custom field six" then
set aValue to custom field six of props
else if aField is "custom field seven" then
set aValue to custom field seven of props
else if aField is "custom field eight" then
set aValue to custom field eight of props
else if aField is "custom phone 1" then
set aValue to custom phone 1 of props
else if aField is "custom phone 2" then
set aValue to custom phone 2 of props
else if aField is "custom phone 3" then
set aValue to custom phone 3 of props
else if aField is "custom phone 4" then
set aValue to custom phone 4 of props
else if aField is "custom date field one" then
set aValue to custom date field one of props
else if aField is "custom date field two" then
set aValue to custom date field two of props
else if aField is "note" then
set aValue to plain text note of props
end if
if aValue is not false then
if length of csvLine > 0 then
set csvLine to csvLine & ","
end if
if (aValue as text) is not "missing value" then
set csvLine to csvLine & "\"" & aValue & "\""
end if
end if
end repeat
end tell
-- Change all embedded "new lines" to spaces. This does mess with the formatting
-- of notes on contacts, but it cleans up the file for more reliable
-- importing. This could be changed to an option later.
set csvLine to replace_text(csvLine, return, " ")
set csvLine to replace_text(csvLine, linefeed, " ")
return csvLine
end build_csv_line
-- replace_text()
-- Replace all occurrences of searchString with replaceString in sourceStr
on replace_text(sourceStr, searchString, replaceString)
set searchStr to (searchString as text)
set replaceStr to (replaceString as text)
set sourceStr to (sourceStr as text)
set saveDelims to AppleScript's text item delimiters
set AppleScript's text item delimiters to (searchString)
set theList to (every text item of sourceStr)
set AppleScript's text item delimiters to (replaceString)
set theString to theList as string
set AppleScript's text item delimiters to saveDelims
return theString
end replace_text
Thank you, but this is a gong show. Why is something that is so important to us all so very, very difficult to do?
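For comparison, the field-sanitizing work the script does by hand (flattening newlines, wrapping values in quotes) is what a CSV library handles safely, including embedded quotes and commas, which the AppleScript version does not escape. A minimal sketch in Python (the function name and dict-of-fields representation are invented for illustration, not part of the script above):

```python
import csv
import io

# Minimal sketch of the same export idea: the csv module handles quoting
# and embedded commas/quotes, so only the newline flattening from the
# original script needs to be done by hand.
def contacts_to_csv(fields, contacts):
    """contacts: list of dicts mapping field name -> value (or None)."""
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
    writer.writerow(fields)  # header row
    for contact in contacts:
        row = []
        for f in fields:
            value = contact.get(f) or ""
            # Flatten embedded newlines, as the AppleScript does.
            row.append(str(value).replace("\r", " ").replace("\n", " "))
        writer.writerow(row)
    return buf.getvalue()
```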
-
How to capture selected value from drop down by index
Dear friends,
I want to capture the selected value from a DropDownByIndex UI element. For example, if the user selects Air France, how do I capture that value? Could anyone please let me know.
Thanks
Vijaya
Hi Vijaya,
You can get the selected value from the drop down as below.
Check the event handler method attached to the onSelect event of the DropDownByIndex UI element; if no event is associated, create an event and attach it to the drop down list.
Now you will have the CONTEXT_ELEMENT in the WDEVENT parameter.
data lo_element type ref to if_wd_context_element.
lo_element = wdevent->get_context_element( name = 'CONTEXT_ELEMENT').
Now you can get the static attribute value of the selected drop down entry. Let us say your drop down list values are populated from context node 'ND_DRP_DOWN':
data ls_data type wd_this->element_nd_drp_down.
lo_element->get_static_attributes(
importing
static_attributes = ls_data ).
Hope this helps you.
Regards,
Rama -
HT5824 Can I use only one (selected) folder from my documents in iCloud?
Can I use only one (selected) folder from my documents in iCloud for MAC and PC?
If the folders were created in the Photos app on the iPad, they don't really contain copies of the photos. They contain pointers to those photos that allow them to appear in the albums that you create. Consequently, they cannot be imported to the computer. Those albums are for local organization on the iPad only and cannot be imported.
You should be able to select the individual photos that you want to import, as far as I know. I can do it on a Mac using iPhoto or Image Capture, so I assume that Windows will allow you to pick and choose which photos you want to import.
Import photos and videos from your iPhone, iPad, or iPod touch to your Mac or Windows PC - Apple Support -
Rs.updateBoolean SQLException: ORA-12899: value too large for column
Complete error is SQLException: ORA-12899: value too large for column "SMSUSER"."PRUEBA"."VLOGIC" (actual: 4, maximum: 1)
Let's see the code:
PreparedStatement ps = null;
ResultSet rs = null;
try {
    ps = conn.prepareStatement("create table prueba(name varchar2(32), vlogic char(1) not null check(vlogic in (0,1)))");
    ps.execute();
    logger.info("Table created.");
    ps = conn.prepareStatement("insert into prueba (name, vlogic) values ('user01', ?)");
    ps.setBoolean(1, true);
    ps.executeUpdate();
    logger.info("Data Inserted.");
    ps = conn.prepareStatement("update prueba set vlogic=? where name=?");
    ps.setBoolean(1, false);
    ps.setString(2, "user01");
    ps.executeUpdate();
    logger.info("Data Updated.");
    // Up to here everything runs OK, but if we try to modify via ResultSet...
    ps = conn.prepareStatement("select vlogic from prueba where name=? for update",
            ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE,
            ResultSet.CLOSE_CURSORS_AT_COMMIT);
    ps.setString(1, "user01");
    rs = ps.executeQuery();
    if (rs.next()) {
        logger.info("Got record.");
        rs.updateBoolean("vlogic", true);
        rs.updateRow();
        logger.info("Column updated.");
    }
} catch (SQLException E) {
    logger.info("SQLException: " + E.getMessage());
} finally {
    closeResultSet(rs);
    closePreparedStatement(ps);
}
The trouble is that when updating via ResultSet, what gets sent is "true" or "false" and not "0" or "1" as with inserts and updates via PreparedStatement.
So the system returns the error: SQLException: ORA-12899: value too large for column "SMSUSER"."PRUEBA"."VLOGIC" (actual: 4, maximum: 1)
because it is trying to insert "true".
Can somebody tell me what's happening here?
Thanks in advance.
Francisco Javier Ascanio Suárez.
E-mail: [email protected]
Ok, but why is this behaviour different for ResultSet updates than for prepared statements?
As you can see in my example, prepared statements with set boolean runs ok.
I like your "proper way", and it resolves my trouble, but it doesn't tell me why I have to program a field update in different ways depending on whether I use prepared statements or updatable result sets.
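One driver-independent workaround is to do the boolean-to-CHAR(1) mapping yourself before the update (in JDBC terms that would be rs.updateString("vlogic", flag ? "1" : "0") instead of updateBoolean, so the driver never converts the boolean to "true"/"false"). A tiny sketch of the mapping, in Python for illustration:

```python
# Map a boolean to the '0'/'1' CHAR(1) representation yourself instead of
# relying on the driver's boolean-to-string conversion, which produced
# the 4-character string 'true' that overflowed the CHAR(1) column.
def to_vlogic(flag):
    return "1" if flag else "0"
```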
Thanks in advance. -
How to select data from an aggregate in a customer exit for a query?
Hi,
I have written a virtual key figure customer exit for a query. Earlier the selection was from the cube, where there was a severe performance issue, so I created an aggregate, activated it, and loaded the data.
Now when I select that data I find that the KEY table is different in development and production. How do I resolve this?
My code is attached below. The table in development is KEY_100027 and in production is KEY_100004. This code is activated and running on the BW development server.
SELECT
*F~KEY_1000041 AS K____035
F~KEY_1000271 AS K____035
F~QUANT_B AS K____051
F~VALUE_LC AS K____052
INTO (xdoc_date, xval1, xqty1)
UP TO 1 ROWS
FROM
*/BIC/E100004 AS F JOIN
/BIC/E100027 AS F JOIN
/BIC/DZMM_CGRNU AS DU
ON F~KEY_ZMM_CGRNU = DU~DIMID
JOIN /BI0/SUNIT AS S1
ON DU~SID_0BASE_UOM = S1~SID
JOIN /BI0/SCURRENCY AS S2
ON DU~SID_0LOC_CURRCY = S2~SID
JOIN /BI0/SMATERIAL AS S3
*ON F~KEY_1000042 = S3~SID
ON F~KEY_1000272 = S3~SID
JOIN /BI0/SMOVETYPE AS S4
*ON F~KEY_1000043 = S4~SID
ON F~KEY_1000273 = S4~SID
JOIN /BI0/SPLANT AS S5
*ON F~KEY_1000044 = S5~SID
ON F~KEY_1000274 = S5~SID
JOIN /BIC/D100004P AS DP
*ON F~KEY_100004P = DP~DIMID
ON F~KEY_100027P = DP~DIMID
WHERE
*( ( ( ( F~KEY_1000041 BETWEEN 20051230 AND 20060630 ) ) AND ( (
( ( ( ( F~KEY_1000271 BETWEEN 20051230 AND 20060630 ) ) AND ( (
S3~MATERIAL = <l_0material> ) ) AND ( (
s4~movetype BETWEEN '101' AND '102' OR
s4~movetype BETWEEN '921' AND '922' OR
s4~movetype BETWEEN '105' AND '106' OR
s4~movetype BETWEEN '701' AND '701' OR
s4~movetype BETWEEN '632' AND '632' ) ) AND ( (
S5~PLANT = <l_0plant> ) ) AND ( (
DP~SID_0RECORDTP = 0 ) ) ) )
ORDER BY F~KEY_1000271 DESCENDING.
IF sy-subrc NE 0.
EXIT.
ENDIF.
ENDSELECT.
How do I transport the code and make it work?
Whats the reason that the two key fields are different.
I had transported the aggregate from development to production. Activated it and filled the data.
What is the way out? Please help.
Regards,
Annie.
Hi Sonu,
The main task is to move the contents of one internal table to another with some condition.
First sort the first internal table and delete the duplicate entries, like below:
sort it_tab by material ascending date_modified descending.
delete adjacent duplicates from it_tab comparing material.
Then move that Internal table contents to another internal table.
Define another internal table with the same structure as the first internal table, and then:
Second Step :
it_itab1 = it_itab.
If you are using a separate header line and body then you can do it like below:
it_itab1[] = it_itab[].
This will fix the issue.
Please let me know if you need any further explanation.
Regards,
Kittu
Edited by: Kittu on Apr 24, 2009 12:21 PM -
Performance problem in select data from data base
hello all,
could you please suggest which SELECT statement is good for fetching data from the database if the database contains more than 10 lakh (1,000,000) records.
I am using the SELECT ... PACKAGE SIZE n statement, but it's taking a lot of time.
with best regards
srinivas rathod
Hi Srinivas,
if you have huge data to select, you can reduce the time a little by using better techniques.
I do not think SELECT PACKAGE SIZE alone will give good performance.
see the below examples :
ABAP Code Samples for Simple Performance Tuning Techniques
1. Query including select and sorting functionality
Code A
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on
f~matnr = g~matnr where g~stlal = '01' order by f~ernam.
Code B
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01'.
sort itab_new by ernam.
Both the above codes essentially do the same function, but the execution time for Code B is considerably less than that of Code A. Reason: an ORDER BY clause on a select statement increases its execution time, so it is profitable to sort the internal table once after selecting the data.
2. Performance Improvement Due to Identical Statements Execution Plan
Consider the queries below and their relative efficiency in saving execution time.
Code C
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam
f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as
f inner join mast as g on f~matnr =
g~matnr where g~stlal
= '01' .
Code D (Identical Select Statements)
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
Both the above codes essentially do the same function, but the execution time for Code D is considerably less than that of Code C. Reason: during execution, each SQL statement is converted through a series of database operation phases. In the second phase (the prepare phase), an execution plan is determined for the current SQL statement and stored; if an identical select statement is used anywhere else in the program, the same execution plan is reused to save time. So keep the structure of a select statement the same when it is used more than once in the program.
3. Reducing Parse Time Using Aliasing
A statement which does not have a cached execution plan must be parsed before execution; this parsing phase is highly time and resource consuming, so any SQL query should include an alias name, for the following reasons.
1. Providing the alias name enables the query engine to resolve the tables to which the specified fields belong.
2. Providing a short alias name (a single-character alias name) is more efficient than providing a long alias name.
Code E
select j~matnr j~ernam j~mtart j~matkl
g~werks g~aenam g~stlal into table itab_new from mara as
j inner join mast as g on j~matnr = g~matnr where
g~stlal = '01' .
In the above code the alias name used is j .
4. Performance Tuning Using Order by Clause
If you are going to read particular records from the resulting internal table based on some key values, the read can be optimized very well by ordering the selected data in the same field order in which the read statement specifies its keys.
Code F
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY MATNR ERNAM MTART MATKL.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
Code G
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY ERNAM MATKL MATNR MTART.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
In the above code F, the read statement following the select statement is having the order of the keys as MATNR, ERNAM, MTART, MATKL. So it is less time intensive if the internal table is ordered in the same order as that of the keys in the read statement.
5. Performance Tuning Using Binary Search
A very simple but useful method of fine-tuning the performance of a read statement is adding BINARY SEARCH to it. If the internal table contains more than about 20 entries, the traditional linear search method proves more time intensive.
Code H
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530' binary search.
Code I
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530'.
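The linear-vs-binary distinction in Code H and Code I exists in most languages; a quick sketch in Python using the standard bisect module (the table contents are invented sample data, and the helper names are made up for illustration):

```python
import bisect

# A sorted "internal table" of material numbers (invented sample data).
materials = sorted(["10010", "11530", "20001", "30555", "40000"])

def binary_read(table, key):
    """READ TABLE ... BINARY SEARCH equivalent: O(log n) on a sorted list."""
    i = bisect.bisect_left(table, key)
    if i < len(table) and table[i] == key:
        return table[i]
    return None  # roughly sy-subrc <> 0 in ABAP terms

def linear_read(table, key):
    """Plain READ TABLE equivalent: O(n) scan."""
    for row in table:
        if row == key:
            return row
    return None
```

Both return the same result; the sorted-plus-binary version just does far fewer comparisons on large tables.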
Thanks
Seshu -
Select data from database tables with high performance
hi all,
how to select data from different database tables with high performance.
I'm using FOR ALL ENTRIES instead of inner joins, yet the load on the database tables is still very high (90% in SE30).
hw to increase the performance.
kindly, reply.
thnksAlso Check you are not using open sql much like distict order by group by , use abap techniques on internal table to acive the same.
also Dont use select endselect.
if possible use up to n rows claus....
taht will limit the data base hits.
also dont run select in siode any loops.
i guess these are some of the trics oyu can use to avoid frequent DATA BASE HITS AND ABVOID THE DATA BASE LAOD. -
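The "don't run a SELECT inside any loop" advice can be illustrated outside ABAP too. A hypothetical Python/sqlite3 sketch (table and keys invented for the demo) contrasting one round trip per key with a single set-based query, which is comparable in spirit to FOR ALL ENTRIES:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mara (matnr TEXT PRIMARY KEY, mtart TEXT)")
conn.executemany("INSERT INTO mara VALUES (?, ?)",
                 [("M1", "HAWA"), ("M2", "ROH"), ("M3", "HAWA")])

wanted = ["M1", "M3", "M9"]

# Slow pattern: one database round trip per key (a SELECT inside a loop)
slow = [conn.execute("SELECT matnr FROM mara WHERE matnr = ?", (k,)).fetchone()
        for k in wanted]

# Better pattern: a single database hit for the whole set of keys
placeholders = ",".join("?" * len(wanted))
fast = conn.execute(
    "SELECT matnr FROM mara WHERE matnr IN (%s)" % placeholders,
    wanted).fetchall()

assert {row[0] for row in fast} == {"M1", "M3"}
```

Both variants find the same rows; the second touches the database once instead of once per key.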
Select from all_tables inside a procedure brings a different result set
Hi all,
There are two cases.. both are the same, but gives different out put.
CASE 1 pulls only a few tables for SYS and all tables for the current user.
CASE 2 pulls data for ALL users.
Why does the same code inside a procedure bring a different result?
SQL>
--CASE 1
CREATE OR REPLACE procedure test_t_owner as
cursor cur1 is select table_name,owner from all_Tables;
begin
for rec1 in cur1
loop
dbms_output.put_line(rec1.table_name||'--'||rec1.owner);
end loop;
end;
set serveroutput on;
exec test_t_owner
--CASE 2
Declare
cursor cur1 is select table_name,owner from all_Tables;
begin
for rec1 in cur1
loop
dbms_output.put_line(rec1.table_name||'--'||rec1.owner);
end loop;
end;
- regards
ski
Create one new user:
create user test identified by test;
grant connect, select any table to test;
grant dba to test;
According to you, when I then run the procedure test_t_owner it should give an error,
and in the case of the anonymous block it should return all rows from all_tables.
But it gives the same output in both cases.
Regards
Singh -
Team , Thanks for looking into this ..
As a last resort for optimizing my stored procedure (below) I wanted to create a selective XML index (normal XML indexes don't seem to improve performance enough), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However,
EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
Is there ANY alternative way I can optimize the stored proc below?
Thanks in advance for your response(s) !
/****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- EXEC [dbo].[MN_Process_DDLSchema_Changes]
ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
AS
BEGIN
SET NOCOUNT ON -- Doesn't have an impact here (maybe this won't affect the SQL Server Extended Events sessions being created on the server(s) / DBs)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select getdate() as getdate_0
DECLARE @XML XML , @Prev_Insertion_time DATETIME
-- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
-- PRINT '1'
CREATE TABLE #Temp
(
EventName VARCHAR(100),
Time_Stamp_EE DATETIME,
ObjectName VARCHAR(100),
ObjectType VARCHAR(100),
DbName VARCHAR(100),
ddl_Phase VARCHAR(50),
ClientAppName VARCHAR(2000),
ClientHostName VARCHAR(100),
server_instance_name VARCHAR(100),
ServerPrincipalName VARCHAR(100),
nt_username varchar(100),
SqlText NVARCHAR(MAX)
)
CREATE TABLE #XML_Hold
(
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
BufferXml XML
)
select getdate() as getdate_01
INSERT INTO #XML_Hold (BufferXml)
SELECT
CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
FROM sys.dm_xe_session_targets xet
INNER JOIN sys.dm_xe_sessions xes
ON xes.address = xet.event_session_address
WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
--RETURN
--SELECT * FROM #XML_Hold
select getdate() as getdate_1
-- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
FOR
(
PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
)
--RETURN
--CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
--SELECT GETDATE() AS GETDATE_2
-- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
--CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
--USING XML INDEX [IX_XML_Hold]
---- FOR VALUE
-- --FOR PROPERTY
-- FOR PATH
--SELECT GETDATE() AS GETDATE_3
--PRINT '2'
-- RETURN
SELECT GETDATE() GETDATE_3
INSERT INTO #Temp
(
EventName ,
Time_Stamp_EE ,
ObjectName ,
ObjectType,
DbName ,
ddl_Phase ,
ClientAppName ,
ClientHostName,
server_instance_name,
nt_username,
ServerPrincipalName ,
SqlText
)
SELECT
p.q.value('@name[1]','varchar(100)') AS eventname,
p.q.value('@timestamp[1]','datetime') AS timestampvalue,
p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
FROM #XML_Hold
CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
SELECT GETDATE() GETDATE_4
-- SELECT * FROM #TEMP
-- SELECT COUNT(*) FROM #TEMP
-- SELECT GETDATE()
-- RETURN
-- PRINT '3'
--RETURN
INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
(
[UserName]
,[DbName]
,[ObjectName]
,[client_app_name]
,[ClientHostName]
,[ServerName]
,[SQL_TEXT]
,[EE_Time_Stamp]
,[Event_Name]
)
SELECT
CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
ELSE T.nt_username
END
,T.DbName
,T.objectname
,T.clientappname
,t.ClientHostName
,T.server_instance_name
,T.sqltext
,T.Time_Stamp_EE
,T.eventname
FROM
#TEMP T
/** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
-- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
WHERE ddl_Phase ='Commit'
AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server Name needes to be added on to to xml ( Events in session )
AND MN.[DbName] = T.DbName
AND MN.[Event_Name] = T.EventName
AND MN.[ObjectName]= T.ObjectName
AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
AND MN.[SQL_TEXT] =T.SqlText -- Ryelugu 03/05/2015 This is a comparision Metric as well , But needs to decide on
-- Performance factor here; will take advice from Lance on whether a comparison on varchar(max) is a viable idea
**/
--SELECT GETDATE()
--PRINT '4'
--RETURN
SELECT
top 100
[EE_Time_Stamp]
,[ServerName]
,[DbName]
,[Event_Name]
,[ObjectName]
,[UserName]
,[SQL_TEXT]
,[client_app_name]
,[Created_Date]
,[ClientHostName]
FROM
[dbo].[MN_DDLSchema_Changes_log]
ORDER BY [EE_Time_Stamp] desc
-- select getdate()
-- ** DELETE EVENTS after logging into Physical table
-- NEED TO Identify if this @XML can be updated into physical system table such that previously loaded events are left untouched
-- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
-- SELECT @XML
SELECT GETDATE() GETDATE_5
END
GO
Rajkumar Yelugu
@@Version:
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
(1 row(s) affected)
Compatibility level is set to 110.
One of the limitations states: XML columns with a depth of more than 128 nested nodes.
How do I verify this? Thanks.
Rajkumar Yelugu -
How can I get the selected rows from two ALV grids at the same time?
I have a program that uses two ALV grids in one dialog screen. I'm using the OO ALV model (SALV* classes).
The user can select any number of rows from each grid. Then, when a toolbar pushbutton is pressed, I'd have to retrieve the selected rows from both grids and start some processing with these rows.
It is no problem to assign event handlers to both grids, and use the CL_SALV_TABLE->GET_SELECTIONS and CL_SALV_SELECTIONS->GET_SELECTED_ROWS methods to find out which rows were marked by the user. Trouble is, this only works when I raise an event in each grid separately, for instance via an own function that I added to the grid's toolbar. So, I can only see the selected rows of the same grid where such an event was raised.
If I try to do this in the PBO of the dialog screen (that contains the two grids), the result of CL_SALV_SELECTIONS->GET_SELECTED_ROWS will be empty, as the program does not recognize the marked entries in the grids. Also, an event for grid1 does not see the selected rows from grid2 either.
As it is right now, I can have an own button in both grid's toolbar, select the rows, click on the extra button in each grid (this will tell me what entries were selected per grid). Then, I'd have to click on a third button (the one in the dialog screen's toolbar), and process the selected rows from both grids.
How can I select the rows, then click on just one button, and process the marked entries from both grids?
Is it somehow possible to raise an event belonging to each grid programmatically, so that then the corresponding CL_SALV_SELECTIONS->GET_SELECTED_ROWS will work?
Thanks.
Hello Tamas,
If I try to do this in the PBO of the dialog screen (that contains the two grids), the result of CL_SALV_SELECTIONS->GET_SELECTED_ROWS will be empty, as the program does not recognize the marked entries in the grids. Also, an event for grid1 does not see the selected rows from grid2 either.--->
Is it possible to have a check box in each grid and get the selected lines in the PAI of the screen?
regards
prabhu -
MDX to fetch record from 1st of current month to 5th of next month and same for previous year
In my date dimension I have a attribute CalendarDate. I do have a hierarchy [Date].[Year].[Quarter].[Month].[CalendarDate] as well. I need to fetch data starting from 1st working day of current month to 5th working day of next month by MDX. I do have a attribute
to filter working day as IsWorkingDay. How can we get a dynamic MDX that will find the current month first and than it will filter the record from 1st working day of current month to 5th working day of next month. And same for the previous year same month
to compare.
Thanks in advance!
Palash
Hi Palash,
You can use a calculated member to dynamically add all the days in the current month and the first 5 in the following month. You will need to change measure, cube and hierarchy names.
with member measures.ThisMonthAnd5 as
sum([Date].[Year].parent.children,Measures.[Sales])
+sum(Head([Date].[Year].parent.nextmember.children,5),Measures.[Sales])
select
{Measures.[Sales]
,Measures.ThisMonthAnd5
} on 0,
non empty [Date].[Year].[CalendarDate]
on 1
from MyCube
Richard -
Performance issue in selecting data from a view because of not in condition
Hi experts,
I have a requirement to select data in a view which is not available in a fact table, with certain join conditions. But the fact table contains 2 crore (20 million) rows, so this view is not working at all; it runs for a long time. I'm pasting the query here, please help me tune it. The whole query except the final NOT IN executes in 15 minutes, but when I add the NOT IN condition it runs for many hours, as the second table has millions of records.
CREATE OR REPLACE FORCE VIEW EDWOWN.MEDW_V_GIEA_SERVICE_LEVEL11
(
SYS_ENT_ID,
SERVICE_LEVEL_NO,
CUSTOMER_NO,
BILL_TO_LOCATION,
PART_NO,
SRCE_SYS_ID,
BUS_AREA_ID,
CONTRACT,
WAREHOUSE,
ORDER_NO,
LINE_NO,
REL_NO,
REVISED_DUE_DATE,
REVISED_QTY_DUE,
QTY_RESERVED,
QTY_PICKED,
QTY_SHIPPED,
ABBREVIATION,
ACCT_WEEK,
ACCT_MONTH,
ACCT_YEAR,
UPDATED_FLAG,
CREATE_DATE,
RECORD_DATE,
BASE_WAREHOUSE,
EARLIEST_SHIP_DATE,
LATEST_SHIP_DATE,
SERVICE_DATE,
SHIP_PCT,
ALLOC_PCT,
WHSE_PCT,
ABC_CLASS,
LOCATION_ID,
RELEASE_COMP,
WAREHOUSE_DESC,
MAKE_TO_FLAG,
SOURCE_CREATE_DATE,
SOURCE_UPDATE_DATE,
SOURCE_CREATED_BY,
SOURCE_UPDATED_BY,
ENTITY_CODE,
RECORD_ID,
SRC_SYS_ENT_ID,
BSS_HIERARCHY_KEY,
SERVICE_LVL_FLAG
)
AS
SELECT SL.SYS_ENT_ID,
SL.ENTITY_CODE
|| '-'
|| SL.order_no
|| '-'
|| SL.LINE_NO
|| '-'
|| SL.REL_NO
SERVICE_LEVEL_NO,
SL.CUSTOMER_NO,
SL.BILL_TO_LOCATION,
SL.PART_NO,
SL.SRCE_SYS_ID,
SL.BUS_AREA_ID,
SL.CONTRACT,
SL.WAREHOUSE,
SL.ORDER_NO,
SL.LINE_NO,
SL.REL_NO,
SL.REVISED_DUE_DATE,
SL.REVISED_QTY_DUE,
NULL QTY_RESERVED,
NULL QTY_PICKED,
SL.QTY_SHIPPED,
SL.ABBREVIATION,
NULL ACCT_WEEK,
NULL ACCT_MONTH,
NULL ACCT_YEAR,
NULL UPDATED_FLAG,
SL.CREATE_DATE,
SL.RECORD_DATE,
SL.BASE_WAREHOUSE,
SL.EARLIEST_SHIP_DATE,
SL.LATEST_SHIP_DATE,
SL.SERVICE_DATE,
SL.SHIP_PCT,
0 ALLOC_PCT,
0 WHSE_PCT,
SL.ABC_CLASS,
SL.LOCATION_ID,
NULL RELEASE_COMP,
SL.WAREHOUSE_DESC,
SL.MAKE_TO_FLAG,
SL.source_create_date,
SL.source_update_date,
SL.source_created_by,
SL.source_updated_by,
SL.ENTITY_CODE,
SL.RECORD_ID,
SL.SRC_SYS_ENT_ID,
SL.BSS_HIERARCHY_KEY,
'Y' SERVICE_LVL_FLAG
FROM ( SELECT SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
MAX (SL_INT.REL_NO) REL_NO,
SL_INT.REVISED_DUE_DATE,
SUM (SL_INT.REVISED_QTY_DUE) REVISED_QTY_DUE,
SUM (SL_INT.QTY_SHIPPED) QTY_SHIPPED,
SL_INT.ABBREVIATION,
MAX (SL_INT.CREATE_DATE) CREATE_DATE,
MAX (SL_INT.RECORD_DATE) RECORD_DATE,
SL_INT.BASE_WAREHOUSE,
MAX (SL_INT.LAST_SHIPMENT_DATE) LAST_SHIPMENT_DATE,
MAX (SL_INT.EARLIEST_SHIP_DATE) EARLIEST_SHIP_DATE,
MAX (SL_INT.LATEST_SHIP_DATE) LATEST_SHIP_DATE,
MAX (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
THEN
TRUNC (SL_INT.LAST_SHIPMENT_DATE)
ELSE
TRUNC (SL_INT.LATEST_SHIP_DATE)
END)
SERVICE_DATE,
MIN (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) >=
TRUNC (SL_INT.EARLIEST_SHIP_DATE)
AND TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
AND SL_INT.QTY_SHIPPED = SL_INT.REVISED_QTY_DUE
THEN
100
ELSE
0
END)
SHIP_PCT,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
MAX (SL_INT.source_create_date) source_create_date,
MAX (SL_INT.source_update_date) source_update_date,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY
FROM (SELECT SL_UNADJ.*,
DECODE (
TRIM (TIMA.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
- 1
- early_ship_days,
'sunday', SL_UNADJ.REVISED_DUE_DATE
- 2
- early_ship_days,
SL_UNADJ.REVISED_DUE_DATE - early_ship_days)
EARLIEST_SHIP_DATE,
DECODE (
TRIM (TIMB.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
+ 2
+ LATE_SHIP_DAYS,
'sunday', SL_UNADJ.REVISED_DUE_DATE
+ 1
+ LATE_SHIP_DAYS,
SL_UNADJ.REVISED_DUE_DATE + LATE_SHIP_DAYS)
LATEST_SHIP_DATE
FROM (SELECT NVL (s2.sys_ent_id, '00') SYS_ENT_ID,
cust.customer_no CUSTOMER_NO,
cust.bill_to_loc BILL_TO_LOCATION,
cust.early_ship_days,
CUST.LATE_SHIP_DAYS,
ord.PART_NO,
ord.SRCE_SYS_ID,
ord.BUS_AREA_ID,
ord.BUS_AREA_ID CONTRACT,
NVL (WAREHOUSE, ord.entity_code) WAREHOUSE,
ORDER_NO,
ORDER_LINE_NO LINE_NO,
ORDER_REL_NO REL_NO,
TRUNC (REVISED_DUE_DATE) REVISED_DUE_DATE,
REVISED_ORDER_QTY REVISED_QTY_DUE,
-- NULL QTY_RESERVED,
-- NULL QTY_PICKED,
SHIPPED_QTY QTY_SHIPPED,
sold_to_abbreviation ABBREVIATION,
-- NULL ACCT_WEEK,
-- NULL ACCT_MONTH,
-- NULL ACCT_YEAR,
-- NULL UPDATED_FLAG,
ord.CREATE_DATE CREATE_DATE,
ord.CREATE_DATE RECORD_DATE,
NVL (WAREHOUSE, ord.entity_code)
BASE_WAREHOUSE,
LAST_SHIPMENT_DATE,
TRUNC (REVISED_DUE_DATE)
- cust.early_ship_days
EARLIEST_SHIP_DATE_UnAdj,
TRUNC (REVISED_DUE_DATE)
+ CUST.LATE_SHIP_DAYS
LATEST_SHIP_DATE_UnAdj,
--0 ALLOC_PCT,
--0 WHSE_PCT,
ABC_CLASS,
NVL (LOCATION_ID, '000') LOCATION_ID,
--NULL RELEASE_COMP,
WAREHOUSE_DESC,
NVL (
DECODE (MAKE_TO_FLAG,
'S', 0,
'O', 1,
'', -1),
-1)
MAKE_TO_FLAG,
ord.CREATE_DATE source_create_date,
ord.UPDATE_DATE source_update_date,
ord.CREATED_BY source_created_by,
ord.UPDATED_BY source_updated_by,
ord.ENTITY_CODE,
ord.RECORD_ID,
src.SYS_ENT_ID SRC_SYS_ENT_ID,
ord.BSS_HIERARCHY_KEY
FROM EDW_DTL_ORDER_FACT ord,
edw_v_maxv_cust_dim cust,
edw_v_maxv_part_dim part,
EDW_WAREHOUSE_LKP war,
EDW_SOURCE_LKP src,
MEDW_PLANT_LKP s2,
edw_v_incr_refresh_ctl incr
WHERE ord.BSS_HIERARCHY_KEY =
cust.BSS_HIERARCHY_KEY(+)
AND ord.record_id = part.record_id(+)
AND ord.part_no = part.part_no(+)
AND NVL (ord.WAREHOUSE, ord.entity_code) =
war.WAREHOUSE_code(+)
AND ord.entity_code = war.entity_code(+)
AND ord.record_id = src.record_id
AND src.calculate_back_order_flag = 'Y'
AND NVL (cancel_order_flag, 'N') != 'Y'
AND UPPER (part.source_plant) =
UPPER (s2.location_code1(+))
AND mapping_name = 'MEDW_MAP_GIEA_MTOS_STG'
-- AND NVL (ord.UPDATE_DATE, SYSDATE) >=
-- MAX_SOURCE_UPDATE_DATE
AND UPPER (
NVL (ord.order_status, 'BOOKED')) NOT IN
('ENTERED', 'CANCELLED')
AND TRUNC (REVISED_DUE_DATE) <= SYSDATE) SL_UNADJ,
EDW_TIME_DIM TIMA,
EDW_TIME_DIM TIMB
WHERE TRUNC (SL_UNADJ.EARLIEST_SHIP_DATE_UnAdj) =
TIMA.ACCOUNT_DATE
AND TRUNC (SL_UNADJ.LATEST_SHIP_DATE_Unadj) =
TIMB.ACCOUNT_DATE) SL_INT
WHERE TRUNC (LATEST_SHIP_DATE) <= TRUNC (SYSDATE)
GROUP BY SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
SL_INT.REVISED_DUE_DATE,
SL_INT.ABBREVIATION,
SL_INT.BASE_WAREHOUSE,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY) SL
WHERE (SL.BSS_HIERARCHY_KEY,
SL.ORDER_NO,
Sl.line_no,
sl.Revised_due_date,
SL.PART_NO,
sl.sys_ent_id) NOT IN
(SELECT BSS_HIERARCHY_KEY,
ORDER_NO,
line_no,
revised_due_date,
part_no,
src_sys_ent_id
FROM MEDW_MTOS_DTL_FACT
WHERE service_lvl_flag = 'Y');
thanks
asn
Also, 'NOT IN' + nullable columns can be an expensive combination, and may not give the expected results. For example, compare these:
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2 );
no rows selected
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2
where key2 is not null );
KEY1
1
1 row selected.
Even if the columns do contain values, if they are nullable Oracle has to perform a resource-intensive filter operation in case they are null. An EXISTS construction is not concerned with null values and can therefore use a more efficient execution plan, leading people to think it is inherently faster in general. -
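The null trap described above is standard SQL three-valued logic, so it reproduces in any engine. A small Python/sqlite3 sketch mirroring the test1/test2 example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test1 (key1 INTEGER)")
conn.execute("CREATE TABLE test2 (key2 INTEGER)")
conn.execute("INSERT INTO test1 VALUES (1)")
conn.execute("INSERT INTO test2 VALUES (NULL)")

# NOT IN against a set containing NULL: 1 <> NULL evaluates to UNKNOWN,
# so the predicate is never true and no rows come back
not_in = conn.execute(
    "SELECT key1 FROM test1 WHERE key1 NOT IN (SELECT key2 FROM test2)"
).fetchall()

# NOT EXISTS is null-safe: the correlated subquery finds no matching row,
# so the test1 row is kept
not_exists = conn.execute(
    "SELECT key1 FROM test1 WHERE NOT EXISTS "
    "(SELECT 1 FROM test2 WHERE test2.key2 = test1.key1)"
).fetchall()

print(not_in)      # []
print(not_exists)  # [(1,)]
```

This is why rewriting the big NOT IN at the end of the view as NOT EXISTS (or adding an explicit IS NOT NULL filter to the subquery columns) both fixes the semantics on nullable columns and usually opens up a cheaper execution plan.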
I backed up the phone and updated to iOS 7.0.4, which took about 2 hours. Now the phone powers on, says hello in different languages, and says to slide to set up. I am asked to choose a network and then whether I want to set up as a new iPhone, restore from iCloud, or restore from an iTunes backup. When I select restore from iTunes, the screen says "connected to iTunes". In iTunes I select restore from backup, however after a few seconds iTunes says that it cannot restore because there is not enough free space on the iPhone. During this attempt to restore I see the home screen and my screen saver and everything for a split second, then the phone powers off.
I see in iTunes that all of my photos and data are still on the phone, however I cannot get into the iPhone.
Have you tried restarting or resetting your iPhone?
Restart: Press On/Off button until the Slide to Power Off slider appears, select Slide to Power Off and, after It shuts down, press the On/Off button until the Apple logo appears.
Reset: Press the Home and On/Off buttons at the same time and hold them until the Apple logo appears (about 10 seconds).
No data will be lost.