Schema support for RFC2307 (LDAP as NIS)

Does anyone know of a way to easily import the schema from RFC 2307 into OID? Does anyone have a bulkload file I could use to handle this?
I've started putting the entries in by hand, but it's getting tedious; I was hoping another kind soul had already done the dirty work.

Hello Matthew:
Here are the schema definitions for RFC 2307. If you would rather have them as files, send me an email and I'll send them to you.
Also, I would not use bulkload just for adding these schema extensions; it's overkill. Just copy the schema definitions into a file and use an ldapmodify command like this:
ldapmodify -h your_host_name -p 389 -D cn=orcladmin -w your_password -v -f /tmp/NISschema.ldi
# Beginning of LDIF file
# This file contains all the schema elements required for use of LDAP as
# Network Information Service. The schema is based on RFC 2307.
# These definitions are subject to change as and when the RFC is updated.
# Contact: Saurabh Shrivastava ([email protected]) for issues related to
# these schema definitions.
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.0 NAME 'uidNumber'
DESC 'An integer uniquely identifying a user in an
administrative domain'
EQUALITY integerMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.1 NAME 'gidNumber'
DESC 'An integer uniquely identifying a group in an
administrative domain'
EQUALITY integerMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.2 NAME 'gecos'
DESC 'The GECOS field; the common name'
EQUALITY caseIgnoreIA5Match
SUBSTRINGS caseIgnoreIA5SubstringsMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.3 NAME 'homeDirectory'
DESC 'The absolute path to the home directory'
EQUALITY caseExactIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.4 NAME 'loginShell'
DESC 'The path to the login shell'
EQUALITY caseExactIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.5 NAME 'shadowLastChange'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.6 NAME 'shadowMin'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.7 NAME 'shadowMax'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.8 NAME 'shadowWarning'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.9 NAME 'shadowInactive'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.10 NAME 'shadowExpire'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.11 NAME 'shadowFlag'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.12 NAME 'memberUid'
EQUALITY caseExactIA5Match
SUBSTRINGS caseExactIA5SubstringsMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.13 NAME 'memberNisNetgroup'
EQUALITY caseExactIA5Match
SUBSTRINGS caseExactIA5SubstringsMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.14 NAME 'nisNetgroupTriple'
DESC 'Netgroup triple'
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.15 NAME 'ipServicePort'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.16 NAME 'ipServiceProtocol' SUP name )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.17 NAME 'ipProtocolNumber'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.18 NAME 'oncRpcNumber'
EQUALITY integerMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.19 NAME 'ipHostNumber'
DESC 'IP address as a dotted decimal, eg. 192.168.1.1,
omitting leading zeros'
EQUALITY caseIgnoreIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26{128}' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.20 NAME 'ipNetworkNumber'
DESC 'IP network as a dotted decimal, eg. 192.168,
omitting leading zeros'
EQUALITY caseIgnoreIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26{128}' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.21 NAME 'ipNetmaskNumber'
DESC 'IP netmask as a dotted decimal, eg. 255.255.255.0,
omitting leading zeros'
EQUALITY caseIgnoreIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26{128}' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.22 NAME 'macAddress'
DESC 'MAC address in maximal, colon separated hex
notation, eg. 00:00:92:90:ee:e2'
EQUALITY caseIgnoreIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26{128}' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.23 NAME 'bootParameter'
DESC 'rpc.bootparamd parameter'
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.24 NAME 'bootFile'
DESC 'Boot image name'
EQUALITY caseExactIA5Match
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26' )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.26 NAME 'nisMapName' SUP name )
dn: cn=subschemasubentry
changeType: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.1.1.1.27 NAME 'nisMapEntry'
EQUALITY caseExactIA5Match
SUBSTRINGS caseExactIA5SubstringsMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.26{1024}' SINGLE-VALUE )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.0 NAME 'posixAccount' SUP top AUXILIARY
DESC 'Abstraction of an account with POSIX attributes'
MUST ( cn $ uid $ uidNumber $ gidNumber $ homeDirectory )
MAY ( userPassword $ loginShell $ gecos $ description ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.1 NAME 'shadowAccount' SUP top AUXILIARY
DESC 'Additional attributes for shadow passwords' MUST uid
MAY ( userPassword $ shadowLastChange $ shadowMin $
shadowMax $ shadowWarning $ shadowInactive $
shadowExpire $ shadowFlag $ description ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.2 NAME 'posixGroup' SUP top STRUCTURAL
DESC 'Abstraction of a group of accounts' MUST ( cn $ gidNumber )
MAY ( userPassword $ memberUid $ description ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.3 NAME 'ipService' SUP top STRUCTURAL
DESC 'Abstraction of an Internet Protocol service. Maps an IP port and protocol (such as tcp or udp)
to one or more names; the distinguished value of the cn attribute denotes the service's canonical
name' MUST ( cn $ ipServicePort $ ipServiceProtocol ) MAY ( description ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.4 NAME 'ipProtocol' SUP top STRUCTURAL DESC 'Abstraction of an IP protocol. Maps a protocol number
to one or more names. The distinguished value of the cn attribute denotes the protocol's canonical name'
MUST ( cn $ ipProtocolNumber $ description ) MAY description )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.5 NAME 'oncRpc' SUP top STRUCTURAL DESC 'Abstraction of an Open Network Computing (ONC)
[RFC1057] Remote Procedure Call (RPC) binding. This class maps an ONC RPC number to a name.
The distinguished value of the cn attribute denotes the RPC service's canonical name'
MUST ( cn $ oncRpcNumber $ description ) MAY description )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.6 NAME 'ipHost' SUP top AUXILIARY DESC 'Abstraction of a host, an IP device. The distinguished
value of the cn attribute denotes the host's canonical name. Device SHOULD be used as a structural class'
MUST ( cn $ ipHostNumber ) MAY ( l $ description $ manager ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.7 NAME 'ipNetwork' SUP top STRUCTURAL DESC 'Abstraction of a network. The distinguished value of
the cn attribute denotes the network's canonical name' MUST ( cn $ ipNetworkNumber )
MAY ( ipNetmaskNumber $ l $ description $ manager ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.8 NAME 'nisNetgroup' SUP top STRUCTURAL DESC 'Abstraction of a netgroup. May refer to other netgroups'
MUST cn MAY ( nisNetgroupTriple $ memberNisNetgroup $ description ) )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.09 NAME 'nisMap' SUP top STRUCTURAL DESC 'A generic abstraction of a NIS map'
MUST nisMapName MAY description )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.10 NAME 'nisObject' SUP top STRUCTURAL DESC 'An entry in a NIS map'
MUST ( cn $ nisMapEntry $ nisMapName ) MAY description )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.11 NAME 'ieee802Device' SUP top AUXILIARY DESC 'A device with a MAC address; device SHOULD be
used as a structural class' MAY macAddress )
dn: cn=subschemasubentry
changeType: modify
add: objectClasses
objectClasses: ( 1.3.6.1.1.1.2.12 NAME 'bootableDevice' SUP top AUXILIARY DESC 'A device with boot parameters; device SHOULD be
used as a structural class' MAY ( bootFile $ bootParameter ) )
# End of LDIF file
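Once ldapmodify reports success, a quick sanity check is to read the definitions back from the server's subschema subentry. Below is a minimal JNDI sketch of that check; the host, bind DN and password are placeholders for your own values, and it assumes the schema is published at cn=subschemasubentry as it is in OID.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class CheckNisSchema {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://your_host_name:389");   // placeholder host
        env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");          // placeholder bind DN
        env.put(Context.SECURITY_CREDENTIALS, "your_password");       // placeholder password

        InitialDirContext ctx = new InitialDirContext(env);
        // Pull back only the attributeTypes published by the subschema subentry
        Attributes attrs = ctx.getAttributes("cn=subschemasubentry",
                new String[] { "attributetypes" });
        Attribute types = attrs.get("attributetypes");
        for (NamingEnumeration<?> e = types.getAll(); e.hasMore();) {
            String def = (String) e.next();
            // The RFC 2307 definitions just loaded should show up here
            if (def.contains("uidNumber") || def.contains("gidNumber")) {
                System.out.println(def);
            }
        }
        ctx.close();
    }
}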
Send me an email if you have any trouble getting this schema loaded into OID.
Thanks,
Jay

Similar Messages

  • Prime Infrastructure 2.0: Open Database Schema Support available?

    Hello,
    I searched the forum for open database schema support for PI.
    I only found open database schemas for the Cisco Works LAN Management Solutions and Cisco Prime LAN Management Solutions.
    Does an open database schema exist for Prime Infrastructure 2.0, as it does for the older LAN Management Solutions?
    Thanks.
    Bastian

    The programmatic interface to Prime Infrastructure is the REST API rather than any open database schema.
    There are published reference guides here. For the most up-to-date information, please see your PI server itself: in Lifecycle view, the Help menu has a link to the server-based API information.
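    As a rough illustration of the REST-API approach (not taken from the PI documentation; the host name, credentials and resource path below are made-up placeholders, so substitute the resources listed in your own server's API reference):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class PiRestSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder host and resource path; consult your PI server's API reference
            URL url = new URL("https://pi-server.example.com/webacs/api/v1/data/Devices");
            String auth = Base64.getEncoder()
                    .encodeToString("apiuser:apipassword".getBytes("UTF-8"));

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Accept", "application/json");

            // Print the raw response body
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null;) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }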

  • Motif L&F: support for color schemes?

    I want to use the Motif L&F in my Java application, but I want the color scheme to be chosen as per the user's installed color theme (using the CDE Style Manager).
    Having a default bluish grey UI does not blend well with my 'Desert' or 'Delphinium' color theme... how can I achieve this in my Java app with minimal code?
    Thanks

    Doesn't look like Sun provided support for themes in
    the Motif L&F...
    I tried the SwingSet demo on Solaris, and when the Motif
    L&F is chosen, the Theme drop-down gets disabled... Got Mantis? The GTK L&F in 1.4.2 supports themes. Otherwise, you might be able to do what you want by subclassing MotifLookAndFeel--just that class, I mean, not the whole L&F. You would actually be creating a new L&F that uses all of the Motif UI delegate classes, but replacing its color scheme with your own (a rough sketch of that approach follows at the end of this thread). I've never tried it, so it might not be that simple--and if it takes any more effort than that, it's not worth it. Not with the GTK L&F already out there, and the Synth L&F coming in 1.5.
    and, as an aside, the SwingSet font in the 'view
    source' pane is extremely small and unreadable on
    Solaris... looks like sloppy workmanship!
    It's the same on Windows. I believe it's due to a known bug that causes the HTML renderer to make Swing's already-too-small fonts even smaller.
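    Following up on the subclassing idea above, here is a rough, untested sketch. The class name and colour values are invented stand-ins for a 'Desert'-like palette, and com.sun.java.swing.plaf.motif is a Sun-internal package, so none of this is a supported API.

    import javax.swing.UIDefaults;
    import javax.swing.UIManager;
    import javax.swing.plaf.ColorUIResource;
    import com.sun.java.swing.plaf.motif.MotifLookAndFeel;

    // Invented class name; reuses all of the Motif UI delegates, only the palette changes
    public class DesertMotifLookAndFeel extends MotifLookAndFeel {
        public String getName()        { return "DesertMotif"; }
        public String getDescription() { return "Motif L&F with a Desert-like palette"; }

        public UIDefaults getDefaults() {
            UIDefaults defaults = super.getDefaults();
            // Made-up sand/brown tones standing in for the CDE 'Desert' scheme
            ColorUIResource control   = new ColorUIResource(0xC9, 0xB9, 0x9A);
            ColorUIResource highlight = new ColorUIResource(0xE6, 0xDC, 0xC8);
            ColorUIResource shadow    = new ColorUIResource(0x8F, 0x80, 0x66);
            defaults.put("control",          control);
            defaults.put("controlHighlight", highlight);
            defaults.put("controlShadow",    shadow);
            defaults.put("window",           control);
            return defaults;
        }

        public static void main(String[] args) throws Exception {
            UIManager.setLookAndFeel(new DesertMotifLookAndFeel());
            // ... build the Swing UI after installing the L&F
        }
    }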

  • Support for XML schema by Sun's parser

    Does Sun's ProjectX XML parser provide support for XML Schema?

    The JAXP conformance documentation does not mention XML Schema, so it does not yet support it. If you are looking for an XML parser that supports XML Schema validation, you can download the Multi Schema Validator (MSV) from http://www.sun.com. Xerces 2.0.0 beta 3 mentions support for XML Schema; I have not tested it, though. Hope this helps.
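    For anyone finding this thread later: current JAXP implementations do ship XML Schema validation via the javax.xml.validation package. A minimal sketch, with placeholder file names:

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class ValidateAgainstXsd {
        public static void main(String[] args) throws Exception {
            // Load the XSD (order.xsd / order.xml are placeholder names)
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new File("order.xsd"));

            // Validate an instance document; throws SAXException on a violation
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("order.xml")));
            System.out.println("order.xml is valid against order.xsd");
        }
    }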

  • NLS ISO88595 support for ldap OID C API

    Please Help!
    How do I tell the Oracle Internet Directory C API to process single-byte
    (ISO-8859-5) strings instead of Unicode strings (in input and output parameter values)?
    #include <ldap.h>
    char* base ="cn=�������_�����, cn=com"; /* in ISO */
    ldap_search_s(ld , base .... ); /* not detecting base with russian word */
    How do I switch on NLS ISO support for the LDAP OID C API?
    Is there any other approach to solve this problem?
    (For example, in the Oracle OCI C API this is solved by setting the client NLS_LANG
    environment variable; in my case NLS_LANG is not working.)

    UP plz

  • Adding ldap support for postfix

    using darwinbuild
    I have read several discussion:
    http://discussions.apple.com/thread.jspa?messageID=1240972
    http://discussions.apple.com/thread.jspa?messageID=658174&#658174
    In reading these I noted that I have to create the directory
    /Appleinternal/Developer/Headers and place the appropriate headers in there.
    This is where I'm stuck looking for the right headers for LDAP.
    smtp1:/usr/include root# ls | grep ldap
    ldap.h
    ldap_cdefs.h
    ldap_features.h
    ldap_schema.h
    ldap_utf8.h
    Are these the header files I need to place in the Developers/ldap directory?
    What all needs to be in here?
    here is the build from the postfix makefile
    build :
    echo "ENV = $(ENV)"
    $(ENV) $(MAKE) -C $(SRCROOT)/$(PROJECT) makefiles OPT="-DBIND8COMPAT -DHAS_SSL -DUSESASLAUTH -D_APPLE_ \
    -I/AppleInternal/Developer/Headers/sasl -framework DirectoryService $(CFLAGS)" \
    AUXLIBS="-L/usr/lib -lssl -lsasl2.2.0.1 -lgssapi_krb5"
    $(ENV) $(MAKE) -C $(SRCROOT)/$(PROJECT)
    The LDAP book (O'Reilly), pg. 148, to enable client support for postfix:
    CCARGS="-I/usr/include -DHAS_LDAP" \
    AUXLIBS ="-L/usr/local/lib -lldap -llber "
    i get the same info from http://www.postfix.org/LDAP_README.html
    I do not see lldap or llber in /usr/lib anywhere. Do these files get created in the build, or do I have to find them? Are these the current options?
    TIA
    -j

    1. Make sure you do not forget the sasl headers.
    Okay, SASL in addition to LDAP information.
    2. Did you recompile LDAP, or just use the one that
    was there on your Tiger default install?
    No, I did not recompile LDAP, although the thought did cross my mind when I saw that my LDAP version was 2.2.19.
    If you did
    re-compile, make sure you used the one from darwin
    sources.
    Darwin source is 2.2.19
    Current version on OpenLDAP site is 2.3.20

  • Support for LDAP

    Hello,
    My question is quite simple : do you plan to support LDAP for Kodo ?
    Regards,
    Dom

    We don't have a timeline for LDAP support. Of course, we offer an
    AbstractStoreManager to aid you in adding your own support for other
    data stores. There is a complete sample for a simple XML store in the
    Kodo download. If you'd like to talk to us about contracting for
    official LDAP support, contact [email protected]

  • Special characters not supported in Embedded LDAP

    Hi All,
    I had a very hectic time trying to debug this issue.
    The requirement was to provide support for + as a special character in the userId.
    The RFC says to escape it using a backslash, and I did exactly that.
    However, it kept on giving me Naming Violation... LDAP error code 64.
    So, in order to verify the code I had written, I connected Apache Directory Server in its place.
    This time around, the code worked.
    Can someone help me with the resolution? As in, does the Embedded LDAP schema need modification? Apparently it does.
    Thanks & Regards
    Yukti Kaura

    Thanks !
    How do we raise a support issue? Is there any ID where I can drop a mail?
    Yukti
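    For reference, the escaping itself can be done with the JDK rather than by hand. A minimal sketch, assuming a JNDI-based client and a made-up user id:

    import javax.naming.ldap.Rdn;

    public class EscapePlusInUid {
        public static void main(String[] args) {
            // '+' is special in a DN (it separates multi-valued RDNs), so it must be escaped
            String uid = "john+doe";                    // hypothetical user id
            String rdn = "uid=" + Rdn.escapeValue(uid); // yields uid=john\+doe
            String dn  = rdn + ",ou=people,dc=example,dc=com";
            System.out.println(dn);
        }
    }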

  • Set Up Native LDAP without NIS?

    The Blueprint documents covering native LDAP support for Solaris 8 are very NIS-centric. I do not have NIS configured for my workgroup, and I do not wish to create the whole NIS domain just to turn around and import it into LDAP.
    The 'dsimport' command information says:
    "The dsimport command takes a text file in /etc/files format as input. Typically, you use the same files that are maintained to generate your NIS maps." Okay, I'm game...
    When I attempt to use the dsimport command, it first core-dumped without doing anything. I obtained a newer version from patch 106621-09, but this one insists on finding the current
    /etc/opt/SUNWconn/ldap/current/dsserv.conf, which is the NIS-to-LDAP agent daemon. Not a good solution!
    What am I missing here? I don't relish the idea of keying every detail through the Directory Console. This was supposed to make management easier, not harder!

    I'm not sure where your problem is coming from. Did you apply the Solaris Extensions for Netscape Directory Server 4.X patch 109953-02? If not, try applying it first, because the original version has a lot of problems.
    For the download, go to http://www.iplanet.com/downloads/patches/ and find
    Solaris Extensions for Netscape Directory Server 4.X patch 109953-02
    Lucas

  • Selective XML Index feature is not supported for the current database version , SQL Server Extended Events , Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort for optimizing my stored procedure (below), I wanted to create a selective XML index (normal XML indexes don't seem to improve performance enough), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However,
    EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s)!
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON --Does'nt have impact ( May be this wont on SQL Server Extended events session's being created on Server(s) , DB's )
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server Name needes to be added on to to xml ( Events in session )
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] =T.SqlText -- Ryelugu 03/05/2015 This is a comparision Metric as well , But needs to decide on
    -- Performance Factor here , Will take advice from Lance if comparison on varchar(max) is a vital idea
    )
    **/
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO Identify if this @XML can be updated into physical system table such that previously loaded events are left untoched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version:
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    Compatibility level is set to 110.
    One of the limitations states: XML columns with a depth of more than 128 nested nodes.
    How do I verify this? Thanks.
    Rajkumar Yelugu

  • No Support for Windows 7! Extremely Frustrated! Help!

    Ok, so here's the deal. I have a TX1220us tablet PC. It came with Vista 32-bit Home Premium on it. Everybody knows that Vista is a resource hog, and running the 32-bit version on a 2.2 GHz 64-bit processor is a complete waste of performance capabilities. So, what's the ideal thing to do? Upgrade to Windows 7 Ultimate, of course! Not so easy... After upgrading, almost nothing works properly on my computer. I did this 8-10 months ago, and immediately noticed the problems. My CD/DVD LightScribe drive only functions partially: it only reads some discs and won't burn anything. The fingerprint sensor doesn't function at all. Worst of all, the digitizer (touchscreen) is no longer touch sensitive. The quick launch buttons weren't working for a while, but I installed some Vista drivers, and luckily was able to get 2 of the 4 to work.
    So what do I do? I talk to HP Support for help. Unfortunately, the only "support" that I could get out of them was in the form of a few sentences. "Unfortunately, HP does not recommend upgrading from the pre-installed operating system. HP does not support anything but the pre-installed operating system" and "This is something that HP is currently developing, and we will update the support page as soon as updates are available for your hardware". The irony of this all is this: At the exact time that HP gave me the first statement, there was an ad with a link from HP on the right margin of my models support page stating "The Wait for Windows 7 is Over! Upgrading your computer has never been easier!".
    Again this was 8-10 months ago that I went through all of this, and now I finally decided to give it another try. You'd think in 10 months that they would be able to develop some sort of updates for customers. Wrong... I once again received the exact same responses with absolutely no sense of concern or care for the happiness of a customer from the support representative. I have finally decided that this is some sort of marketing scheme or sales scam to force their customers to go out and buy a new computer, even though there is no need for it. My laptop is 300x faster on Win7 x64 than it was on Vista, and after paying for an expensive new operating system for a tablet that i purchased only 3 years ago for $1,600 I have no intent on going out and purchasing a new one just so I can run Windows 7.
    Does anybody know who to talk to about getting issues like this resolved or to get compensation for HP's complete lack of care for customer satisfaction? Any info would be nice. Finally, if anybody has come up with a way to get this system working properly on Win 7, I'm open for suggestions. I have tried installing HP Vista drivers on my system, but when I run the setup packages, they exit out saying that the setup programs are designed for windows vista, that it has detected a different operating system, and that the setup will now exit.
    Help!
    tx1220us - Windows 7 Ultimate x64 - Nothing but problems

    I think you would be very surprised at how many of your drivers are supported natively by Windows 7. I have an even newer AMD laptop which I installed W7 on and had to install very few  drivers. Running Windows Update manually from Control Panel and choosing to also see 'Optional' updates gave me those.

  • Support for virtual IP in WKA list

    I am attempting to implement Coherence Grid Edition 3.5.3 Patch 7 . It's running on Sun Java 1.6.06 on a Solaris 5.10 machine (which I will call machine A1) which has an ip address 1.2.3.261. Also, there is a virtual IP address 1.2.3.260 which points to this same machine A1, as well as a hostname that maps to the virtual IP address of the machine A1.
    (Note: I have mocked up the IP addresses in this post for security reasons).
    I am using WKA due to concerns of the local network admins regarding multicast (debatable, but for now let's please assume I need to go with WKA).
    If this production machine A1 develops any problem, we can quickly switch to an alternate machine A2. The alternate machine A2 contains automatically replicated copies of all of our important sub-directories (including the coherence xml configuration files), but it does have a different IP.
    During a failure of machine A1, we would shut down machine A1, start up machine A2, and reroute the virtual IP to alternate machine A2's true IP. Machine A2 would be supplied a copy of the file-system that A1 had when A1 was shutdown.
    We would seek to not have to edit any configuration files on A2. It is difficult to maintain alternate configuration files using our current replication scheme, and any machine-specific customizations are problematic.
    Therefore, we would prefer to list the virtual IP of machines A1 and A2 in the WKA list, rather than the true IP of these machines.
    However, when I attempt to list the virtual IP in the WKA list...
                   <well-known-addresses>
                        <socket-address id="1">
                        <address>1.2.3.260</address> <!--virtual ip -->
                        <port>8088</port>
                        </socket-address>
                   </well-known-addresses>
    ... I get the error below.
    2010-09-03 12:01:51.464/1.401 Oracle Coherence GE 3.5.3/465p7 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2010-09-03 12:01:54.760/4.697 Oracle Coherence GE 3.5.3/465p7 <Error> (thread=Cluster, member=n/a): Node myhostname/1.2.3.261:8088 is not allowed to create a new cluster; WKA list: [1.2.3.260:8088]
    2010-09-03 12:01:54.760/4.697 Oracle Coherence GE 3.5.3/465p7 <D5> (thread=Cluster, member=n/a): Service Cluster left the cluster
    Exception in thread "main" java.lang.RuntimeException: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_ANNOUNCE)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:395)
    at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
    at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:7)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:27)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:998)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:905)
    at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:139)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:60)
    2010-09-03 12:01:54.763/4.700 Oracle Coherence GE 3.5.3/465p7 <Error> (thread=main, member=n/a): Error while starting cluster: java.lang.RuntimeException: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_ANNOUNCE)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:395)
    at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
    What exactly is being checked here? Does each Coherence instance expect to see its own IP listed in the WKA list, and refuse to run if it can't find its own IP on the list?
    Is the instance detecting its own IP via an operating system call, or is there some way I can influence the instance's sense of which IP it is running on?
    Is there any advice for supporting this kind of virtual IP scenario?
    I have tried listing the hostname with no better success, because the hostname points to the virtual IP, and therefore has the same effect as listing the virtual IP.
    As a workaround, I have altered the WKA list to list of the true IPs of both machines...
                   <well-known-addresses>
                        <socket-address id="1">
                        <address>1.2.3.261</address> <!-- A1 -->
                        <port>8088</port>
                        </socket-address>
                        <socket-address id="2">
                        <address>1.2.3.262</address> <!-- A2 -->
                        <port>8088</port>
                        </socket-address>
                   </well-known-addresses>
    ... even though both machines will never be up and running at the same time. This workaround avoids the error, and the cluster appears to come up. I am not sure whether there is any ill effect from the fact that, at any given time, one of these two IPs will not be up and running.
    Despite my workaround, true support for virtual IPs would be superior; listing all the true non-virtual IPs in the WKA list means re-editing the list as the hardware changes.
    Thanks for any advice!
    P.S. I tried the above with Coherence 3.5.2 with the same results.
    Edited by: user11114413 on Sep 3, 2010 11:45 AM

    Hi user11114413,
    The issue you are seeing actually has little to do with the VIP, and more to do with there being multiple IP addresses for us to choose from on your box. For such multi-IP boxes, you'll want to tell us which IP to use, and in your case you want that to be the VIP. This can be done either by editing your operational configuration file and including an <address> element within the <unicast-listener> element, or via the tangosol.coherence.localhost system property. For example:
    <unicast-listener>
        <well-known-addresses>
            <socket-address id="1">
                <address>1.2.3.260</address> <!--virtual ip -->
                <port>8088</port>
            </socket-address>
        </well-known-addresses>
        <address>1.2.3.260</address> <!--virtual ip -->
        <port>8088</port>
    </unicast-listener>

    or

    java ... -Dtangosol.coherence.localhost=1.2.3.260

    If you are using the same operational configuration on all nodes in your cluster then the system property approach is likely preferable, and would only be necessary on the two machines sharing the VIP.
    As for using the VIP or an extended WKA list, the choice is yours; either will work. If you do go the VIP route, it would obviously be a very bad idea to use the same VIP and port at the same time from the two machines.
    thanks,
    Mark
    Oracle Coherence
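    A minimal sketch of the system-property approach driven from application code rather than the command line (the address mirrors the mocked-up VIP from this thread; the property must be set before the cluster service starts):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Cluster;
    import com.tangosol.net.Member;

    public class StartWithVip {
        public static void main(String[] args) {
            // Equivalent of -Dtangosol.coherence.localhost=1.2.3.260 (mocked-up VIP);
            // must be set before the cluster is started in this JVM
            System.setProperty("tangosol.coherence.localhost", "1.2.3.260");

            Cluster cluster = CacheFactory.ensureCluster(); // joins (or forms) the cluster bound to the VIP
            Member local = cluster.getLocalMember();
            System.out.println("Joined cluster as " + local);

            CacheFactory.shutdown();
        }
    }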

  • ALDSP 3.0 -- schema owner for stored procedure or SQL Statement

    Using ALDSP, I have a need to create a physical service based on a stored procedure or a SQL statement. I am wondering what will happen when I move to another deployment environment where the schema owner changes. In our QA and Prod environments, we have a different schema owner for all tables in the application (the DBAs believe this prevents unwanted updates to a prod environment). DSP elegantly supports this for normal table- and view-based physical services by mapping schemas through the DSP console after deployment. Will I get the same type of mapping capability for stored procedures and SQL statements? I noticed that I can add a SQL-based function to a physical service...is there a way to pass in the physical table name from that data service to the procedure or SQL statement?
    Thanks,
    Jeff

    Schema name substitution should work for stored procedures just like it does for tables. If it doesn't, report a bug.
    You don't get any help for SQL-statement-based data services; DSP doesn't parse the SQL provided. One thing you could do is use the default schema (following the user of your connection pool) and not specify the schema in your SQL statement.

  • Support for PL/SQL Record

    We have a procedure in a package, and this procedure makes use of a record type.
    In BPEL, while creating the partner link, when we try to access this procedure we get the following error:
    "WSDLException:faultCode=OTHER_ERROR:Database type is either not supported or is not implemented.
    Parameter L_ECO_REC is of type SAN_REC_TYPE which is either not supported or not an implemented data type.
    Check to ensure that the type of the parameter is one of the supported datatypes or that there is a collection or user defined type definition representing this type defined in the database.contact oracle support if error is not fixable."
    Also please find below the package specification and body:
    CREATE OR REPLACE package eco_pack is
    TYPE San_Rec_Type IS RECORD
    ( Eco_Name VARCHAR2(10)
    , Change_Notice_Prefix VARCHAR2(10)
    , Change_Notice_Number NUMBER
    , Organization_Code VARCHAR2(3)
    );
    PROCEDURE sandeep(l_eco_rec IN eco_pack.San_Rec_Type);
    end;
    CREATE OR REPLACE package body eco_pack is
    PROCEDURE createeco(l_eco_rec IN eco_pack.San_Rec_Type) AS
    BEGIN
    INSERT
    INTO ag_log
    VALUES(1, 'ECO name is' || l_eco_rec.eco_name, 1);
    COMMIT;
    END;
    end;
    Does BPEL support the PL/SQL RECORD type, and if so, how?
    Thanks,
    Shivram

    PL/SQL types like RECORD, BOOLEAN and TABLE are not supported in the DB adapter with the current BPEL PM release (as the error message indicates). You can use JPublisher manually to generate an OBJECT type that corresponds with the RECORD. JPublisher will also create a wrapper and conversion APIs to convert between the two. You would then call the wrapper API which takes the OBJECT and then calls the underlying PL/SQL that takes the RECORD.
    With the 10.1.2 Phase 2 release of BPEL PM, the DB adapter does all of this for you from within the design time wizard. JPublisher is invoked silently under the covers, the SQL that gets generated is automatically loaded into the database schema. That will create the OBJECT type, wrapper and conversion APIs. An XSD is generated for the wrapper API. Your partner link will invoke the wrapper, not the original API.
    Note also that support for BOOLEAN and TABLE was also added. JPublisher generates wrapper APIs that substitute appropriate types for these parameters (e.g. INTEGER for BOOLEAN, Nested Table for TABLE).
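    To make the wrapper idea concrete, here is a hedged JDBC sketch of calling such a generated wrapper. The OBJECT type name and wrapper package name below are invented (JPublisher decides the real ones); only the JDBC calls themselves are standard.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Struct;

    public class CallWrapperWithRecord {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orcl", "scott", "tiger");

            // Hypothetical OBJECT type generated by JPublisher for San_Rec_Type;
            // fields follow the record: name, prefix, number, organization code
            Object[] fields = { "ECO-1", "CN", 1001, "ORG" };
            Struct rec = conn.createStruct("ECO_PACK_SAN_REC_TYPE_OBJ", fields);

            // Hypothetical wrapper procedure that converts the OBJECT to the RECORD
            // and calls the original eco_pack procedure internally
            CallableStatement cs = conn.prepareCall("{ call eco_pack_wrapper.sandeep(?) }");
            cs.setObject(1, rec);
            cs.execute();
            cs.close();
            conn.close();
        }
    }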

  • Support for DbXml specific functionality in XQuery vs. Shell or API

    Is / will there be any support for doing common commands in pure XQuery rather than just programmatically or through the shell? For example, in the shell I can create / list / delete indexes on a container, output query plans, etc. Are the same functions that are called through the shell available as XQuery functions? possibly in a dbxml function namespace?
    In a related question, but possibly deserving of a new thread if it doesn't already have one - I know XQuilla has the ability to call registered external functions from parsed XQuery, is there a way to tell DbXml to register a function with XQuilla's static context before using it? that would make it possible for me to add the functionality described above myself without disturbing your distributed code.

    Not so handy with C++ (my everyday languages are PHP and Java), but if you say it can be done then I'll take a hack at it. If you happen to have examples of how other people have done it and you could point me at them, it would be marvelous.
    To answer your second question: "bingo." It mostly means less interface work and brings it closer to what SQL can do in relational databases.
    The project allows the end user to define their own document structures (schema definitions, more or less) and then create instances of these documents in an XML editing interface. For each document type, the end user is also able to define a set of named queries (abstracted function declarations), which lets us capture business rules without customizing our PHP code. Because the structure and queries that will be used against the documents are user-defined, it's fairly impossible for me to automatically set up adequate indexes for the container of each set of documents. So I need an interface to allow the user to create / review / delete the indexes themselves.
    We've already created an interface that allows the user to execute arbitrary queries against a selected document. In the future we'd also like this same interface to do result set based content updates (through the XmlModify class) and whole document addition / replacement. So inclusion of index control makes sense as well. We can find ways to use the APIs, it just seems like this could benefit more than just our project.
    Placing the functionality from the shell into XQuery extension functions seems analogous to having the UPDATE, DELETE and CREATE syntaxes of SQL.

Maybe you are looking for

  • Need to delete /usr/bin without using terminal

    In a bit of a bind. All how-tos detailing how to delete or alter the /usr/bin file involve terminal, but terminal won't work b/c of this error login: PAM Error (line 396): System error login: Could not determine audit condition [Process completed] I

  • Parse xml as string, not file

    I have to parse an xml file i get as a string, what;s the best way to parse it w/o converting to file. then i need just to get the elements , what's the most simple way and which parser to use. my task woudl be aggregation. say i have several element

  • Importing just got very slow!

    I've been running itunes great for the past year, all of a sudden I'm going from 10x-12x speed importing to 1.2x-2x. What's the deal? I've run every adware, spyware and virus scan I have and everything checks out hunky dorry. I'm too impatient to wai

  • Have to Display Transfer orders in horizontal format in script outputlayout

    Hi, I had a problem, I have to display Transfer Orders in Horizontal format in Script Out put layout. For this i need a logic for displaying in horizontal way.. Can you suggest anything ? Thanks a lot .....

  • Need help on displaying the popular items.

    Hi All, I have a requirement to work on display the popular items Could you please help me how to full fill this requirement . Thank you, Regards, Jyothi