DBMS_DATAPUMP exclude
Hi,
How do I exclude indexes, synonyms, grants and statistics while doing an import using DBMS_DATAPUMP?
Could anyone please give me the syntax to exclude those things.
Thanks
JP
Hi JP,
The problem is the single quotes - try this example (just update your schema name as appropriate etc):
DECLARE
  l_dp_handle      NUMBER;
  l_last_job_state VARCHAR2(30) := 'UNDEFINED';
  l_job_state      VARCHAR2(30) := 'UNDEFINED';
  l_sts            KU$_STATUS;
  v_job_state      VARCHAR2(4000);
BEGIN
  l_dp_handle := DBMS_DATAPUMP.open(operation   => 'EXPORT',
                                    job_mode    => 'SCHEMA',
                                    remote_link => NULL,
                                    version     => 'LATEST');
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => 'test.dmp',
                         directory => 'DATA_PUMP_DIR',
                         reusefile => 1);
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => 'test.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1);
  DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle, 'SCHEMA_LIST', '''ALIGNE''');
  -- note: the object path names are singular - INDEX, SYNONYM, GRANT, STATISTICS
  DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle,
                                'EXCLUDE_PATH_EXPR',
                                'IN (''INDEX'', ''SYNONYM'', ''GRANT'', ''STATISTICS'')');
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_dp_handle, v_job_state);
  DBMS_OUTPUT.PUT_LINE(v_job_state);
END;
/
Cheers,
Harry
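Since JP's original question was about import, here is a minimal sketch of the same filter attached to an IMPORT job. It is untested and assumes the test.dmp file and DATA_PUMP_DIR directory object from the export above:

```sql
DECLARE
  l_dp_handle NUMBER;
  v_job_state VARCHAR2(4000);
BEGIN
  l_dp_handle := DBMS_DATAPUMP.open(operation => 'IMPORT',
                                    job_mode  => 'SCHEMA');
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => 'test.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- the same path filter works on import: skip indexes, synonyms, grants, statistics
  DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle,
                                'EXCLUDE_PATH_EXPR',
                                'IN (''INDEX'', ''SYNONYM'', ''GRANT'', ''STATISTICS'')');
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_dp_handle, v_job_state);
  DBMS_OUTPUT.PUT_LINE(v_job_state);
END;
/
```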
Similar Messages
-
Particular Trigger exclude using DBMS_DATAPUMP export in SCHEMA mode, how?
Hi,
Is it possible to specify exclude filter for particular object type and name (TRIGGER, name = TRG_CHECK)
when DBMS_DATAPUMP export in SCHEMA mode is used?
If yes, how? (I could not find any examples).
Thanks.
Hi,
Yes, it's possible.
Go for "DBMS_DATAPUMP.metadata_filter".
Try it - I haven't checked it myself.
You can use a name expression in TABLE mode, e.g. NAME_EXPR as '!=''DEPT'''
or
dbms_datapump.metadata_filter(vhandle, 'EXCLUDE_PATH_LIST', '''TRIGGER'',''GRANT''');
DBMS_DATAPUMP.metadata_filter( handle      => l_dp_handle,
                               name        => 'NAME_EXPR',
                               value       => 'NOT IN (''TRG_CHECK'')',
                               object_type => 'TRIGGER');
- Pavan Kumar N -
How to include user_sequences in a dbms_datapump procedure
hi,
i have a procedure to move all tables from MYSCHEMA_1 to MYSCHEMA_2 over a db_link
using dbms_datapump.
That works fine,
but i need to copy the user_sequences too.
A detail - my procedure looks like this:
## create job in TABLE mode:
job_handle := dbms_datapump.open (
  operation    => 'IMPORT'
  ,job_mode    => 'TABLE'
  ,remote_link => p_database_link
  ,job_name    => vjob_name);
## exclude some tables:
dbms_datapump.metadata_filter (
  handle => job_handle,
  name   => 'NAME_EXPR',
  value  => 'NOT IN (''TABLE_EXC_1'', ''TABLE_EXC_2'', ''TABLE_EXC_3'')');
I found this to work with plain expdp:
expdp INCLUDE=SEQUENCE:"IN ('SEQ','SEQ2')" INCLUDE=TABLE:"IN ('TABLE','TABLE2')"
How to include user_sequences when working with dbms_datapump - API,
maybe working with dbms_datapump.metadata_filter?
Hi,
you are right -
including SEQUENCES and other objects like VIEW or FUNCTION
only works in SCHEMA mode.
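A rough sketch of that SCHEMA-mode variant (untested; the filter and remap names are standard DBMS_DATAPUMP ones, the schema names are taken from the post above):

```sql
job_handle := dbms_datapump.open (
  operation   => 'IMPORT',
  job_mode    => 'SCHEMA',
  remote_link => p_database_link,
  job_name    => vjob_name);
-- limit the job to the source schema
dbms_datapump.metadata_filter (
  handle => job_handle,
  name   => 'SCHEMA_LIST',
  value  => '''MYSCHEMA_1''');
-- bring over only tables and sequences
dbms_datapump.metadata_filter (
  handle => job_handle,
  name   => 'INCLUDE_PATH_EXPR',
  value  => 'IN (''TABLE'', ''SEQUENCE'')');
-- land the objects in the target schema
dbms_datapump.metadata_remap (
  handle    => job_handle,
  name      => 'REMAP_SCHEMA',
  old_value => 'MYSCHEMA_1',
  value     => 'MYSCHEMA_2');
```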
But the next problem is:
when I want to refresh the dump,
I can refresh the tables only:
dbms_datapump.set_parameter(
  handle => job_handle,
  name   => 'TABLE_EXISTS_ACTION',
  value  => 'REPLACE');
but not the other objects like SEQUENCE or FUNCTION
(found in Oracle Support Note 1323411.1).
It would be nice to have an OBJECT_EXISTS_ACTION parameter in the future. -
How to exclude statistic using Data Pump API?
How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?
You would call the metadata filter api like this:
dbms_datapump.metadata_filter(
  handle => your_handle_here,
  name   => 'EXCLUDE_PATH_LIST',
  value  => '''STATISTICS''');
Hope this helps.
Dean -
How to exclude a partition from schema mode export?
I am using Oracle 10g Data Pump Export utility expdp. What I am trying to do is to export a single schema, except for a certain partition P in table T.
I have tried:
expdp user/pass@db dumpfile=... logfile=... exclude=table:" = 'T:P' "
(please ignore the OS specific text escaping issue)
It doesn't work. The whole table T gets exported.
Is there a way to exclude partitions from schema mode export?
If not, is there a way I can achieve the same with DBMS_DATAPUMP API?
Edited by: 950367 on 2-Aug-2012 10:42 AM
Try the "QUERY" Data Pump Export parameter. Unless your partition is a hash partition, you should be able to construct a WHERE clause that gets you all records in the table except the records for the partition you want to exclude.
For instance if you want to exclude a range partition (between 100 and 200) on the col1 column you should put something like this:
QUERY=USR.TAB:"WHERE COL1 < 100 OR COL1 > 200"
This approach would export the partition definition though.
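For the DBMS_DATAPUMP API half of the question, the rough equivalent of the QUERY parameter is DBMS_DATAPUMP.DATA_FILTER with the SUBQUERY filter. A sketch only (untested; the handle variable and the USR.TAB names are placeholders from the example above):

```sql
dbms_datapump.data_filter(
  handle      => l_dp_handle,
  name        => 'SUBQUERY',
  value       => 'WHERE col1 < 100 OR col1 > 200',
  table_name  => 'TAB',
  schema_name => 'USR');
```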
Iordan Iotzov
http://iiotzov.wordpress.com/
Edited by: Iordan Iotzov on Aug 2, 2012 10:55 AM -
Exclude create user statements using datapump API
I'm trying to perform a schema import and exclude the "create user" statements using the datapump API but I can't get the syntax correct for the dbms_datapump.metadata_filter call.
Using impdp I use a parfile that includes the following statements:
schemas=bob, john
exclude=user
How do I achieve the same effect using the dbms_datapump.metadata_filter API?
Any assistance greatly appreciated.
Gavin
Did you ever figure out your issue? I'm having the same issue after I try to set attributes.
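Since neither poster got an answer in the thread, here is a hedged sketch of one way to express the parfile's schemas=bob,john plus exclude=user through the API, reusing the filter names that appear elsewhere in this thread (untested):

```sql
l_dp_handle := dbms_datapump.open(operation => 'IMPORT',
                                  job_mode  => 'SCHEMA');
-- parfile: schemas=bob, john
dbms_datapump.metadata_filter(
  handle => l_dp_handle,
  name   => 'SCHEMA_LIST',
  value  => '''BOB'',''JOHN''');
-- parfile: exclude=user (skip the CREATE USER definitions)
dbms_datapump.metadata_filter(
  handle => l_dp_handle,
  name   => 'EXCLUDE_PATH_LIST',
  value  => '''USER''');
```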
-
How to Exclude table like %AUDIT% using the Datapump API
Hello,
I am trying to use the datapump API to import tables across a database link, excluding tables with a name like %AUDIT%. I have it all working except the table exclusion. I am using Oracle 11.1.0.6.
This works in my app in DB R11.2, in job_mode = 'TABLE':
dbms_datapump.metadata_filter (
  handle => job_handle,
  name   => 'NAME_EXPR',
  value  => 'NOT IN (''TABLENAME_1'', ''TABLENAME_2'', ''TABLE_NAME_3'')');
Try this - it should work for you:
dbms_datapump.metadata_filter (
  handle => job_handle,
  name   => 'NAME_EXPR',
  value  => 'NOT LIKE ''%AUDIT%''');
Edited by: astramare on Jul 7, 2011 2:57 PM -
Expdp include with dbms_datapump API
Hi,
I am trying to translate the following command to use it with the dbms_datapump API, but I am having difficulty writing the correct syntax for the INCLUDE=SCHEMA:\"=\'myschema\'\" section.
expdp dumpfile=mydirectoryp:mydumpfile.dp INCLUDE=SCHEMA:\"=\'myschema\'\" logfile=mydirectoryg:mylogfile.log full=y
Could someone help me with this ?
Thank you
JP
Hi,
I'm working with JP on that...
We know that it will produce the same result, but it was proposed in an Oracle SR as a workaround to bug # 7362589 (as listed in Note 1253955.1) and it does the job.
I tried this solution, but the result is not the same. I mean, the log files do not contains the same result and the dump files do not have the same size.
Solution 1, command line:
expdp dumpfile=dbadev_dp:exp_TSTBASEEXPORTIMPORT_1.dp INCLUDE=SCHEMA:\"=\'TSTBASEEXPORTIMPORT\'\" logfile=dbadev_log:exp_TSTBASEEXPORTIMPORT_log.log full=y
LogFile:
[oracle@qcdvcn1001-dev711 dbadev]$ cat exp_TSTBASEEXPORTIMPORT_log.log
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 19 April, 2011 9:33:08
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DBADEV"."SYS_EXPORT_FULL_01": dbadev/******** dumpfile=dbadev_dp:exp_TSTBASEEXPORTIMPORT_1.dp INCLUDE=SCHEMA:"='TSTBASEEXPORTIMPORT'" logfile=dbadev_log:exp_TSTBASEEXPORTIMPORT_log.log full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
. . exported "TSTBASEEXPORTIMPORT"."TESTA" 4.929 KB 1 rows
. . exported "TSTBASEEXPORTIMPORT"."TESTB" 0 KB 0 rows
Master table "DBADEV"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Dump file set for DBADEV.SYS_EXPORT_FULL_01 is:
/mnt/e3be11/oracle/dump/dbadev/exp_TSTBASEEXPORTIMPORT_1.dp
Job "DBADEV"."SYS_EXPORT_FULL_01" successfully completed at 09:51:38
Dump file:
-rw-rw---- 1 oracle dba *229376* Apr 19 09:51 exp_TSTBASEEXPORTIMPORT_1.dp
Solution 2, dbms_datapump:
lv_handle := dbms_datapump.open(operation => 'EXPORT',job_mode => 'FULL',job_name => lv_jobName,version => 'LATEST');
dbms_datapump.metadata_filter(handle => lv_handle,name => 'NAME_EXPR', value => '='''||upper(pin_request.sourceUser)||'''',object_type => 'SCHEMA');
LogFile:
[oracle@qcdvcn1001-dev711 dbadev]$ cat exp_dev711_TSTBASEEXPORTIMPORT_201104190956.log
Starting "DBADEV"."EXP_TSTBASEEXPORTIMPORT":
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/CONTEXT
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
. . exported "TSTBASEEXPORTIMPORT"."TESTA" 4.929 KB 1 rows
. . exported "TSTBASEEXPORTIMPORT"."TESTB" 0 KB 0 rows
Master table "DBADEV"."EXP_TSTBASEEXPORTIMPORT" successfully loaded/unloaded
Dump file set for DBADEV.EXP_TSTBASEEXPORTIMPORT is:
/mnt/e3be11/oracle/dump/dbadev/tstbaseexportimport_201104190956_01.dp
Job "DBADEV"."EXP_TSTBASEEXPORTIMPORT" successfully completed at 10:16:28
Dump file:
-rw-rw---- 1 oracle dba *405504* Apr 19 10:16 tstbaseexportimport_201104190956_01.dp
Do we need to exclude the difference (bold sections of the second log file) with a filter of type EXCLUDE_PATH_EXPR ?
thanks for your help!
jonathan -
Can I exclude places.sqlite from time machine backup?
Hi,
Just looking at the time machine backup and raised a question about the repeated backup of places.sqlite in "~/Library/Application Support/Firefox/Profiles/xxxxxxxx.default/", I am wondering if I can exclude this file from my backup or even exclude Profiles from my backup?
What would this affect my backup? Are these all essentials?
I have looked into this page and understand what places.sqlite does. However, what I don't know is: if this file is missing, does Firefox produce one automatically?
Or say, if each of my places.sqlite in the previous backups was like 90MB, how much space would they really take in the storage?
Alvyn wrote:
Thus, I am thinking if this "places.sqlite" thing is there every hour putting around 100MB to my time capsule, quickly my time capsule will be depleted.
No, that's very small.
Is there any way to do something like trimming time points from time capsule?
Under normal circumstances, you shouldn't have to. TM automatically "thins" (deletes) backups every time it does a new backup, on the following schedule:
"Hourly" backups after 24 hours (except the first of the day, which is a "Daily" backup).
"Daily" backups after a month (except the first of each week, which is a "Weekly" backup.)
"Weekly" backups are kept until TM needs the space for new backups; then one or more of the oldest weeklies will be deleted.
So after a month, even if you do an hourly backup every day, and that 100 MB file is changed and backed up *every time* (not real likely), you'd have 54 backups (one per day for a month, plus 24 hourlies). That would take a total of 5.4 GB, small potatoes on any TM drive, and really not worth worrying about.
But if you want, you could delete all the backups of it. See #12 in the Frequently Asked Questions *User Tip,* also at the top of this forum. -
ASA 5510 Firewall internet Restriction based on IP address and block rest users excluding Mails
Hi,
As I have an assignment to create an access list based on IP address: we have to allow internet access for the IP range 192.168.172.201 to 212,
and block the rest of the users, excluding mail.
Please help.
Thanks,
Regards,
Hemant Yadav
login as: Rakh
[email protected]'s
password:
Type help or '?' for a list of available commands.
FAST-HQ-ASA> en
Password:
Invalid password
Password: ***********
FAST-HQ-ASA# show rum
^
ERROR: % Invalid input detected at '^' marker.
FAST-HQ-ASA# show run
: Saved
ASA Version 8.3(1)
hostname FAST-HQ-ASA
enable password 7tt1ICjiO2a2/Hn2 encrypted
passwd U8oee3lIrDCUmSK2 encrypted
names
interface Ethernet0/0
description ASA Outside segment
speed 100
duplex full
nameif OUTSIDE
security-level 0
ip address 62.173.33.67 255.255.255.240
interface Ethernet0/1
description VLAN AGGREGATION point
no nameif
no security-level
no ip address
interface Ethernet0/1.2
description INSIDE segment (User)
vlan 2
nameif INSIDE
security-level 100
ip address 192.168.172.1 255.255.255.0
interface Ethernet0/1.3
description LAN
vlan 3
nameif LAN
security-level 100
ip address 192.168.173.1 255.255.255.0
interface Ethernet0/2
shutdown
no nameif
no security-level
no ip address
interface Ethernet0/3
shutdown
no nameif
no security-level
no ip address
interface Management0/0
nameif management
security-level 100
ip address 192.168.1.1 255.255.255.0
management-only
ftp mode passive
same-security-traffic permit inter-interface
same-security-traffic permit intra-interface
object network INSIDE
subnet 192.168.172.0 255.255.255.0
object network LAN
subnet 192.168.173.0 255.255.255.0
object network MAIL-SERVER
host 192.168.172.32
object network DENY-IP-INTERNET
range 192.168.172.121 192.168.172.200
object-group service serBLOCK-INTERNET tcp
port-object eq www
object-group network BLOCK-IP-INTERNET
network-object object DENY-IP-INTERNET
access-list 102 extended permit icmp any any time-exceeded
access-list 102 extended permit icmp any any echo-reply
access-list OUTSIDE-IN extended permit tcp any host 192.168.172.32 eq smtp
access-list OUTSIDE-IN extended permit tcp any host 192.168.172.32 eq https
access-list BLOCK-WWW extended deny tcp object-group BLOCK-IP-INTERNET any object-group serBLOCK-INTERNET
access-list BLOCK-WWW extended permit ip any any
pager lines 24
logging asdm informational
mtu OUTSIDE 1500
mtu INSIDE 1500
mtu LAN 1500
mtu management 1500
icmp unreachable rate-limit 1 burst-size 1
no asdm history enable
arp timeout 14400
object network INSIDE
nat (INSIDE,OUTSIDE) dynamic interface
object network LAN
nat (LAN,OUTSIDE) dynamic interface
object network MAIL-SERVER
nat (INSIDE,OUTSIDE) static 62.173.33.70
access-group OUTSIDE-IN in interface OUTSIDE
access-group BLOCK-WWW out interface OUTSIDE
route OUTSIDE 0.0.0.0 0.0.0.0 62.173.33.65 1
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
dynamic-access-policy-record DfltAccessPolicy
aaa authentication ssh console LOCAL
http server enable
http 192.168.1.0 255.255.255.0 management
no snmp-server location
no snmp-server contact
snmp-server enable traps snmp authentication linkup linkdown coldstart
crypto ipsec security-association lifetime seconds 28800
crypto ipsec security-association lifetime kilobytes 4608000
vpn-addr-assign local reuse-delay 5
telnet timeout 5
ssh 192.168.172.37 255.255.255.255 INSIDE
ssh 192.168.173.10 255.255.255.255 LAN
ssh timeout 5
console timeout 0
threat-detection basic-threat
threat-detection statistics access-list
no threat-detection statistics tcp-intercept
webvpn
username Rakh password EV9pEo1UkhHJSbIW encrypted
class-map inspection_default
match default-inspection-traffic
policy-map type inspect dns preset_dns_map
parameters
message-length maximum client auto
message-length maximum 512
policy-map global_policy
class inspection_default
inspect dns preset_dns_map
inspect ftp
inspect h323 h225
inspect h323 ras
inspect rsh
inspect rtsp
inspect esmtp
inspect sqlnet
inspect skinny
inspect sunrpc
inspect xdmcp
inspect sip
inspect netbios
inspect tftp
inspect ip-options
service-policy global_policy global
prompt hostname context
call-home
profile CiscoTAC-1
no active
destination address http
https://tools.cisco.com/its/service/oddce/services/DDCEService
destination address email
[email protected]
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly
subscribe-to-alert-group configuration periodic monthly
subscribe-to-alert-group telemetry periodic daily
Cryptochecksum:1ee78d19f958efc6fd95f5e9d4e97b8d
: end
FAST-HQ-ASA# -
Is there any way to browse all sales excluding rental only & short films?
Is there any way to browse all film sales excluding rental only & short films?
I am finding the laborious process of clicking through page by page in cover view annoying to the extreme to search new additions to the available film sales lists every week or so. The display page is always slightly deeper than my window so it means having to scroll down to view the last row of covers and click next page.
Apple... Is there any chance of having an auto scrolling (with pause and previous/next feature) film cover flow with description view of each film large enough to view the image, preferably filling the screen, perhaps also a feature in Front Row that will show available films to buy/rent in iTunes or an RSS screen saver that could give you a constant update of the last 10/20 films added?
Something that will make the task of searching for titles you might like to buy less of a 'task' and more intuitive. I don't really want to browse by Genre and a Cover image only view tells you nothing about the film.
If the screen in front of me gave me a constantly (controllable when required) changing view, similar to the content displayed in Front Row of a title you have selected i'd be happy.
Message was edited by: Rob Wilton
Could you please tell me how to view all the movies I have rented?
-
How to exclude Web App from search results
Hi
Search results link to a unstyled Web App instead to the actual page it resides in.
Please do this:
1. go to: http://kinship.businesscatalyst.com/
2. search for "Michael" on the top global search
3. on the search results page click on the name (link).
4. you will see Michael's web app item not the actuall page it resides in (http://kinship.businesscatalyst.com/About/the-team)
How to avoid getting web app results in search?
Thanks
Micha
Hi Micha
Just add "&OT=35" at the end of the action in your search form:
Ex:
<form name="xxxx" method="post" action="/Default.aspx?SiteSearchID=3566&ID=/results&OT=35">
<div class="search-box"><input type="text" class= ............../>
<input type="submit" class="cat_button" value="search" />
</div>
</form>
Here are the rest of the content types IDs, should you come across similar situations in the future:
Web Pages = 1
Literature = 6
Announcements = 7
FAQs = 9
Forums = 43
Blogs = 55
Web Apps = 35
Catalogs = 26
Bookings = 48
You can exclude multiple areas from a search, simply list them with commas: &OT=35,1,6 -
Exclude duplicate values on SQL where clause statement
Hi!
Are there any possibilities to exclude duplicate values without using SQL aggregate functions in the main select statement?
Preview SQL statement:
SELECT * FROM
(SELECT id, hin_id, name, code, valid_date_from, valid_date_to
   FROM diaries
) QRSLT
WHERE (hin_id = (SELECT NVL(historic_id,id)FROM tutions where id=/*???*/ 59615))
AND NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) <= (SELECT NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy'))FROM tutions where id= /*???*/ 59615)
AND trunc(valid_date_from) >=(SELECT trunc(valid_date_from)FROM tutions where id= /*???*/ 59615)
The result
ID     HIN_ID  NAME   CODE  VALID_DATE_FROM      VALID_DATE_TO
50512  59564   RE TU  01    07.06.2013 16:32:15  07.06.2013 16:33:28
50513  59564   TT2    02    07.06.2013 16:33:23  07.06.2013 16:33:28
50515  59564   TT2    02    07.06.2013 16:33:28  07.06.2013 16:34:42
50516  59564   ROD    03    07.06.2013 16:34:37  07.06.2013 16:34:42
VALID_DATE_TO and VALID_DATE_FROM in tutions:
07.06.2013 16:34:42
15.07.2013 10:33:23
In this case I got a duplicate of entry TT2 (id 50513). In the main select statement I can't use aggregate functions. Is it even possible to exclude this value from the result by modifying only the QRSLT WHERE clause (TRUNC needs to stay)?
THANKS FOR ANY TIP!
ID.
Hi, OK, this is working in this case:
SELECT * FROM
(SELECT id, hin_id, name, code, valid_date_from, valid_date_to
   FROM diaries ahs
) QRSLT
WHERE (hin_id = (SELECT NVL(historic_id,id)FROM aip_healthcare_tutions where id=/*???*/ 59615))
AND NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) <= (SELECT NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy'))FROM tutions where id= /*???*/ 59615)
AND trunc(valid_date_from) >=(SELECT trunc(valid_date_from)FROM tutions where id= /*???*/ 59615)
AND NOT EXISTS
(SELECT null FROM diaries ahs WHERE ahs.valid_date_from < QRSLT.valid_date_from
AND QRSLT.hin_id=ahs.hin_id
AND QRSLT.code=ahs.code);
Result:
ID     HIN_ID  NAME   CODE  VALID_DATE_FROM      VALID_DATE_TO
50512  59564   RE TU  01    07.06.2013 16:32:15  07.06.2013 16:33:28
50513  59564   TT2    02    07.06.2013 16:33:23  07.06.2013 16:33:28
50516  59564   ROD    03    07.06.2013 16:34:37  07.06.2013 16:34:42
But if the data in the tutions row is this (valid_date_to = null), then NO ROWS are returned, and it's logical, because in the full result list the valid_date_from column is logically incorrect:
valid_date_from      valid_date_to
15.07.2013 10:33:23  NULL
ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
50510  59564   RE TU   01    07.06.2013 16:33:28
50511  59564   TT2     02    07.06.2013 16:34:41
50514  59564   ROD     03    07.06.2013 16:34:41
50520  59564   Params  04    03.07.2013 21:01:30
50512  59564   RE TU   01    07.06.2013 16:32:15  07.06.2013 16:33:28
50513  59564   TT2     02    07.06.2013 16:33:23  07.06.2013 16:33:28
50515  59564   TT2     02    07.06.2013 16:33:28  07.06.2013 16:34:42
50516  59564   ROD     03    07.06.2013 16:34:37  07.06.2013 16:34:42
Is it possible, by modifying the WHERE statement, that if valid_date_to in tutions is null then the records where the diary's valid_date_to is null are correct too, while keeping the previous logic?
ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
50510  59564   RE TU   01    07.06.2013 16:33:28  null
50511  59564   TT2     02    07.06.2013 16:34:41  null
50514  59564   ROD     03    07.06.2013 16:34:41  null
50520  59564   Params  04    03.07.2013 21:01:30  null
Thanks !
ID. -
How do I use the Exclude in the customer exit
I have to create a customer variable which will be passed back to the program and used for an exclusion. The user enters company codes and these pass into the user exit. In the exit I need to just place an S in front of them, and these need to be used to exclude data. I tried restricting the variable in the query but it doesn't allow me to exclude. How do I accomplish this?
Hi MICK,
If you want to write code in customer exit and exclude some values for the variable in query,note the following things
- l_s_range-sign = 'E'.
Here 'E' means exclude
- Create the variable with OPTIONAL
Otherwise your customer exit won't work
Hope this helps.
Assign points if helpful
Regards,
Aaron Wang -
Inconsistency between get-childitem -include and -exclude parameters
Hi,
Powershell 2.0
Does anyone else consider this a minor design bug in the Get-ChildItem command?
# create dummy files
"a","b","c" | % {$null | out-file "c:\temp\$_.txt"}
# this "fails", returns nothing
get-childitem c:\temp -include a*,b*
# this "works", returns desired files
get-childitem c:\temp\* -include a*,b*
# this "works", excludes undesired files
get-childitem c:\temp -exclude a*,b*
# this "fails", excludes undesired files BUT RECURSES sub-directories
get-childitem c:\temp\* -exclude a*,b*
I'm writing a wrapper script around the GCI cmdlet, but the inconsistency between the two parameters is problematic. My end user will surely just type a path for the path parameter, then wonder why -include returned nothing. I can't unconditionally
add an asterisk to the path parameter, since that messes up the exclude output.
I'm just wondering why Microsoft didn't make the parameter interaction consistent???
# includes desired files in the specified path
get-childitem -path c:\temp -include a*,b*
# excludes undesired files in the specified path
get-childitem -path c:\temp -exclude a*,b*
# combine both options
get-childitem -path c:\temp -include a*,b* -exclude *.log,*.tmp
# same as above, the asterisk doesn't matter
get-childitem -path c:\temp\* -include a*,b*
get-childitem -path c:\temp\* -exclude a*,b*
get-childitem -path c:\temp\* -include a*,b* -exclude *.log,*.tmp
# same as above, but explicitly recurse if that's what you want
get-childitem -path c:\temp\* -include a*,b* -recurse
get-childitem -path c:\temp\* -exclude a*,b* -recurse
get-childitem -path c:\temp\* -include a*,b* -exclude *.log,*tmp -recurse
If I execute the "naked" get-childitem command, the asterisk doesn't matter...
# same results
get-childitem c:\temp
get-childitem c:\temp\*
If this isn't considered a bug, can you explain why the inconsistency between the two parameters when combined with the -path parameter?
Thanks,
Scott
The Get-ChildItem cmdlet syntax is horrific for advanced use. It's not a bug in the classic sense, so you shouldn't call it that. However, feel free to call it awful, ugly, disastrous, or any other deprecatory adjective you like - it really is
nasty.
Get-ChildItem's unusual behavior is rooted in one of the more 'intense' dialogues between developers and users in the beta period. Here's how I recall it working out; some details are a bit fuzzy for me at this point.
Get-ChildItem's original design was as a tool for enumerating items in a namespace -
similar to but not equivalent to dir and
ls. The syntax and usage was going to conform to standard PowerShell (Monad at the time) guidelines.
In a nutshell, what this means is that the Path parameter would have truly just meant Path - it would not have been usable as a combination path specification and result filter, which it is now. In other words
(1) dir c:\temp
means you wanted to return children of the container c:\temp
(2) dir c:\temp\*
means you wanted to return children of all containers inside
c:\temp. With (2), you would never get c:\tmp\a.txt returned, since a.txt is not a container.
There are reasons that this was a good idea. The parameter names and filtering behavior was consistent with the evolving PowerShell design standards, and best of all the tool would be straightforward to stub in for use by namespace
providers consistently.
However, this produced a lot of heated discussion. A rational, orthogonal tool would not allow the convenience we get with the dir command for doing things like this:
(3) dir c:\tmp\a*.txt
Possibly more important was the "crash" factor. It's so instinctive for admins to do things like (3) that our fingers do the typing when we list directories, and the instant failure or worse, weird, dissonant output we would get with a more pure Path
parameter is exactly like slamming into a brick wall.
At this point, I get a little fuzzy about the details, but I believe the Get-ChildItem syntax was settled on as a compromise that wouldn't derail people expecting files when they do (3), but would still allow more complex use. I think that this
is done essentially by treating all files as though they are containers for themselves. This saves a lot of pain in basic use, but introduces other pain for advanced use.
This may shed some light on why the tool is a bit twisted, but it doesn't do a lot to help with your particular wrapping problem. You'll almost certainly need to do some more complicated things in attempting to wrap up Get-ChildItem. Can you describe some
details of what your intent is with the wrapper? What kind of searches by what kind of users, maybe? With those details, it's likely people can point out some specific approaches that can give more consistent results.