Extending eDirectory schema for OIF Federation Data Store

Since I couldn't find LDIF files specifically for eDirectory (Novell Directory Server), I tried extending the schema using the OpenLDAP and GENERIC schema files. Both attempts failed because of file-format incompatibilities.
Has anyone been successful in updating their eDirectory schema?
Thanks

I hope you know that eDirectory is not supported - http://www.oracle.com/technology/software/products/ias/files/idm_certification_101401.html#BABHFBCC
Like any LDAP-compliant server, eDirectory has its own idiosyncrasies, and you might hit issues because OIF with eDirectory as the federation data store has not been QA-ed by Oracle.
-Vinod
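
For anyone who wants to try despite the unsupported status: eDirectory does accept schema extensions over LDAP against its cn=schema entry, so one avenue is to hand-translate the GENERIC schema file into eDirectory-friendly LDIF and load it with ldapmodify. A minimal sketch of the shape such a file takes (the OID and names below are placeholders, not the actual OIF federation schema elements):
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'fedExampleAttr' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
-
add: objectClasses
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'fedExampleClass' SUP top AUXILIARY MAY ( fedExampleAttr ) )
Loaded with something like:
ldapmodify -h edir-host -p 389 -D cn=admin,o=org -w password -f oif-edir.ldif
Whether OIF then works against the extended schema is exactly the un-QA-ed territory Vinod describes.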

Similar Messages

  • OIF : Federation Data Store

    I have configured an RDBMS as the federation data store.
    Now when I navigate to OIF Instance -> Administration -> Identities -> Federated Identities and click on Search, I get the following error:
    Unable to perform search.
    The following error occurred while performing the operation above:
    javax.management.RuntimeMBeanException: An unexpected error occurred while performing the federation record search: oracle.security.fed.admin.search.exceptions.SearchDatastoreException: Error ocurred while querying database
    Also, datastore.xml is missing from my OIF server. The RCU schemas are already installed in the DB. Please let me know what I did wrong.
    Regards,
    RA
    Edited by: R_A on Jan 4, 2012 1:23 AM


  • Error Extending eDirectory Schema for Radius in iManager

    I am working on integrating eDirectory with FreeRADIUS on our OES 11 SP2 servers. I have been following all the steps in the "Integrating Novell eDirectory with FreeRADIUS" guide located here: https://www.netiq.com/documentation/edir_radius/. I did not have any problems installing FreeRADIUS or modifying its config files for LDAP authentication.
    I am now stuck trying to extend the eDirectory schema for radius. In iManager, I go to Roles and Tasks --> radius --> Extend Schema, and I keep getting the following error: "RADIUS plugin encountered an error. Click the Details button for more information." When I click "details" it shows the following:
    java.lang.NullPointerException
    at java.util.StringTokenizer.<init>(StringTokenizer.java:88)
    at java.util.StringTokenizer.<init>(StringTokenizer.java:66)
    at com.novell.ldap.LDAPConnection.connect(Unknown Source)
    at com.novell.nps.radius.NovellLDAPAuthenticator.login(NovellLDAPAuthenticator.java:155)
    at com.novell.nps.radius.ExtendRadiusSchema.showInitialForm(ExtendRadiusSchema.java:178)
    at com.novell.nps.radius.ExtendRadiusSchema.execute(ExtendRadiusSchema.java:96)
    at com.novell.emframe.dev.Task.execute(Task.java:505)
    at com.novell.nps.gadgetManager.BaseGadgetInstance.processRequest(BaseGadgetInstance.java:858)
    at com.novell.nps.gadgetManager.GadgetManager.delegateToGadget(GadgetManager.java:4256)
    at com.novell.nps.gadgetManager.LaunchService.onDelegateAction(LaunchService.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at com.novell.nps.gadgetManager.BaseGadgetInstance.handleAction(BaseGadgetInstance.java:2371)
    at com.novell.nps.gadgetManager.GadgetManager.processInstanceRequest(GadgetManager.java:1609)
    at com.novell.nps.gadgetManager.GadgetManager.processServiceRequest(GadgetManager.java:1062)
    at com.novell.nps.PortalServlet.handleFrameService(PortalServlet.java:509)
    at com.novell.nps.PortalServlet.processRequest(PortalServlet.java:373)
    at com.novell.nps.PortalServlet.doPost(PortalServlet.java:279)
    at com.novell.nps.PortalServlet.doGet(PortalServlet.java:262)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at com.novell.emframe.fw.servlet.AuthenticatorServlet.service(AuthenticatorServlet.java:332)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at com.novell.emframe.fw.filter.CrossScriptingFilter.doFilter(CrossScriptingFilter.java:25)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at com.novell.emframe.fw.filter.AntiCsrfServletFilter.doFilter(AntiCsrfServletFilter.java:275)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:530)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:769)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:698)
    at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:891)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
    at java.lang.Thread.run(Thread.java:761)
    Can anyone give me an idea of what is going on here? Everything I've been able to dig up so far has dealt with schema conflict errors and SSL/TLS connection issues, and I don't think that is what's going on here. I am getting the same error on multiple servers with eDirectory and iManager installed. Any help is appreciated. Thank you.
    Scot

    Originally Posted by bjunker
    [snip]
    There seems to be a known bug for this issue; I suggest you open an SR if you can.
    Thomas
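    Given that the trace dies in a StringTokenizer inside LDAPConnection.connect, this looks like the plugin never receives a usable LDAP host string, rather than a schema conflict. While waiting on the SR, one possible workaround is to extend the schema outside iManager with ICE (Novell Import Conversion Export), pointing it at the radius schema LDIF from the integration kit. A sketch, in which the LDIF path and credentials are assumptions to adapt:
    ice -S LDIF -f /path/to/radius.ldif -D LDAP -s edir-server -p 389 -d cn=admin,o=org -w password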

  • I need to extend the schema for iPlanet Dir. 5.0 and add custom objectclasses and attributes. I do this by adding entries in the 99user.ldif file. It's not working. Any ideas?

    Hi
    I need to extend the schema for iPlanet Dir. 5.0 and I do not want to do so from the console. As per the documentation, I need to either add entries to the 99user.ldif file or define my own custom [00-99]myname.ldif file. I tried this but it's not working.
    I have made the assumption that there is no explicit import step for the 'user defined' schema files (as there is for user data LDIF files). I assume that on startup (or on opening the console), I'd be able to see the new schema after the server has read the schema file.
    I have verified that entering new objectclasses and attributes from the console adds entries to the 99user.ldif file. So why is the reverse process not working? Can anybody throw some light on this? Also, in case my assumptions are faulty, please let me know.
    I did not change the ACI entries in the existing LDIF file. Is any modification needed there? I was logged in as the Directory Manager during this testing process.
    regards
    Sikka ([email protected])

    Hi Sikka,
    The server reads its schema configuration on startup. If you manually modify the schema files while the server is running, it will not have any effect. You have to restart the server.
    The console adds the new schema elements over LDAP (you could do that as well, you only have to modify the cn=schema entry), so the server is aware of the changes immediately and thus restarting is not needed.
    I hope this helps.
    Bertold
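    Bertold's point about the console doing it over LDAP is worth expanding on: if you add the elements through ldapmodify against cn=schema, the server picks them up immediately and, if I remember correctly, writes them out to 99user.ldif with X-ORIGIN 'user defined'. A rough sketch of such an LDIF (the names and descriptive OIDs are placeholders, not a tested schema):
    dn: cn=schema
    changetype: modify
    add: attributeTypes
    attributeTypes: ( myCompanyAttr-oid NAME 'myCompanyAttr' DESC 'example custom attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'user defined' )
    -
    add: objectClasses
    objectClasses: ( myCompanyPerson-oid NAME 'myCompanyPerson' DESC 'example custom class' SUP inetOrgPerson STRUCTURAL MAY ( myCompanyAttr ) X-ORIGIN 'user defined' )
    Loaded with something like:
    ldapmodify -h localhost -p 389 -D "cn=Directory Manager" -w password -f 99custom-changes.ldif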

  • Please add XSD Schema for validating TLF data in TLF 3.0

    It would be very beneficial to have an XSD schema for validating TLF data. Please add this to TLF 3.0. There are a couple of posts where others have already asked for this:
    http://forums.adobe.com/message/2795099#2795099
    http://forums.adobe.com/message/2223205
    Thanks!

    Sure Gang!
    We could use the XML schema to validate the TLF markup that we are generating from our publishing system. We generate XML files which include the TLF markup, and an XML schema would be very beneficial for validating that markup to make sure we are doing everything right.

  • How to extend AD schema for Exchange

    Hi,
    Please can someone help me understand the process for extending the AD schema for Exchange?
    I'm concerned because there are some cautions noted about the process, so I'm looking for best practice to ensure the extension happens in a safe and smooth manner.
    A bit of background on our environment:
    Moving to Office 365 in a hybrid environment; we have never used Microsoft products for email.
    New on-prem physical AD DC, Server 2012, with DirSync to Azure/O365.
    OUs populated with objects.
    We would like to manage users and distribution groups from within AD on-prem.
    Many thanks,
    Leo.

    It is recommended to have at least two DC/GC servers for HA.
    We are using Office 365 too, for 65k+ users, and we have PowerShell scripts to make the attribute changes and identify users with missing or incorrect messaging attributes. So you might think about using PowerShell scripts too.
    65k? That's the largest I've heard yet! Are you in an educational environment? That's what I understand and should have pointed out. We are a university and the first to use O365 consolidating Google, Notes and Mirapoint active mailboxes. We still haven't touched Alumni, which are on Google. That would take us to the 75k level, but that discussion is on a back burner.
    Ace Fekay
    MVP, MCT, MCSE 2012, MCITP EA & MCTS Windows 2008/R2, Exchange 2013, 2010 EA & 2007, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
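    For the schema-extension step itself, a sketch of the usual commands, run from the Exchange setup media on a domain member with Schema Admins and Enterprise Admins rights (these are the Exchange 2013-era switches; the organization name is a placeholder):
    Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
    Setup.exe /PrepareAD /OrganizationName:"YourOrg" /IAcceptExchangeServerLicenseTerms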

  • Could not start cache agent for the requested data store

    Hi,
    This is my first attempt in TimesTen. I am running TimesTen on the same Linux host (RHES 5.2) that running Oracle 11g R2. The version of TimesTen is:
    TimesTen Release 11.2.1.4.0
    Trying to create a simple cache.
    The DSN entry for ttdemo1 in .odbc.ini is as follows:
    [ttdemo1]
    Driver=/home/oracle/TimesTen/timesten/lib/libtten.so
    DataStore=/work/oracle/TimesTen_store/ttdemo1
    PermSize=128
    TempSize=128
    UID=hr
    OracleId=MYDB
    DatabaseCharacterSet=WE8MSWIN1252
    ConnectionCharacterSet=WE8MSWIN1252
    Using ttIsql, I connect:
    Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
    Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
    (Default setting AutoCommit=1)
    Command> call ttcacheuidpwdset('ttsys','oracle');
    Command> call ttcachestart;
    10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
    The command failed.
    The following is shown in the tterrors.log:
    15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Datastore: TTDEMO1 OCIEnvCreate failed. Return code -1
    15:41:21.82 Err : : 7140: oraagent says it has failed to start: Could not initialize Oracle Environment Handle.
    15:41:22.36 Err : : 7140: TT14004: TimesTen daemon creation failed: Could not spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handl
    What are the reasons that the daemon cannot spawn another agent? FYI the environment variables are set as:
    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
    oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
    /home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
    Cheers

    Sure thanks.
    Here you go:
    Daemon environment:
    _=/bin/csh
    DISABLE_HUGETLBFS=1
    SYSTEM=TEST
    INIT_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/init+ASM.ora
    GEN_APPSDIR=/home/oracle/dba/bin
    LD_LIBRARY_PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
    HOME=/home/oracle
    SPFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
    TNS_ADMIN=/u01/app/oracle/product/11.2.0/db_1/network/admin
    INITFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
    HTMLDIR=/home/oracle/+ASM/dba/html
    HOSTNAME=rhes5
    TEMP=/oradata1/tmp
    PWD=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin
    HISTSIZE=1000
    PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/oci:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/jdbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc_drivermgr:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/proc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/sdk:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant/bin:/usr/kerberos/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/bin/X11:/usr/X11R6/bin:/usr/platform/SUNW,Ultra-2/sbin:/u01/app/oracle/product/11.2.0/db_1:/u01/app/oracle/product/11.2.0/db_1/bin:.
    GEN_ADMINDIR=/home/oracle/dba/admin
    CONTROLFILE_DIR=/u01/app/oracle/backup/+ASM/controlfile_dir
    ETCDIR=/home/oracle/+ASM/dba/etc
    GEN_ENVDIR=/home/oracle/dba/env
    DATAFILE_DIR=/u01/app/oracle/backup/+ASM/datafile_dir
    BACKUPDIR=/u01/app/oracle/backup/+ASM
    RESTORE_ARCFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_arcfiles.txt
    TMPDIR=/oradata1/tmp
    CVS_RSH=ssh
    ARCLOG_DIR=/u01/app/oracle/backup/+ASM/arclog_dir
    REDOLOG_DIR=/u01/app/oracle/backup/+ASM/redolog_dir
    INPUTRC=/etc/inputrc
    LOGDIR=/home/oracle/+ASM/dba/log
    DATAFILE_LIST=/u01/app/oracle/backup/+ASM/datafile_dir/datafile.list
    LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
    PS1=rhes5:($ORACLE_SID)$
    G_BROKEN_FILENAMES=1
    SHELL=/bin/ksh
    PASSFILE=/home/oracle/dba/env/.ora_accounts
    LOGNAME=oracle
    ORA_NLS10=/u01/app/oracle/product/11.2.0/db_1/nls/data
    ORACLE_SID=mydb
    APPSDIR=/home/oracle/+ASM/dba/bin
    ORACLE_OWNER=oracle
    RESTOREFILE_DIR=/u01/app/oracle/backup/+ASM/restorefile_dir
    SQLPATH=/home/oracle/dba/bin
    TRANDUMPDIR=/tran
    RESTORE_SPFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_spfile.txt
    RESTORE_DATAFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_datafiles.txt
    ENV=/home/oracle/.kshrc
    SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
    SSH_CONNECTION=50.140.197.215 62742 50.140.197.216 22
    LESSOPEN=|/usr/bin/lesspipe.sh %s
    TERM=xterm
    GEN_ETCDIR=/home/oracle/dba/etc
    SP_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/spfile+ASM.ora
    ORACLE_BASE=/u01/app/oracle
    ASTFEATURES=UNIVERSE - ucb
    ADMINDIR=/home/oracle/+ASM/dba/admin
    SSH_CLIENT=50.140.197.215 62742 22
    TZ=GB
    SUPPORT=oracle@linux
    ARCHIVE_LOG_LIST=/u01/app/oracle/backup/+ASM/arclog_dir/archive_log.list
    USER=oracle
    RESTORE_TEMPFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_tempfiles.txt
    MAIL=/var/spool/mail/oracle
    EXCLUDE=/home/oracle/+ASM/dba/bin/exclude.lst
    GEN_LOGDIR=/home/oracle/dba/log
    SSH_TTY=/dev/pts/2
    RESTORE_INITFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_initfile.txt
    HOSTTYPE=i386-linux
    VENDOR=intel
    OSTYPE=linux
    MACHTYPE=i386
    SHLVL=1
    GROUP=dba
    HOST=rhes5
    REMOTEHOST=vista
    EDITOR=vi
    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ODBCINI=/home/oracle/.odbc.ini
    TT=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/
    SHLIB_PATH=/u01/app/oracle/product/11.2.0/db_1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1//lib
    ANT_HOME=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/ttjdbc5.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/orai18n.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/jms1.1/lib/jms.jar:.
    TT_AWT_PLSQL=0
    NLS_LANG=AMERICAN_AMERICA
    NLS_COMP=ANSI
    NLS_SORT=BINARY
    NLS_LENGTH_SEMANTICS=BYTE
    NLS_NCHAR_CONV_EXCP=FALSE
    NLS_CALENDAR=GREGORIAN
    NLS_TIME_FORMAT=hh24:mi:ss
    NLS_DATE_FORMAT=syyyy-mm-dd hh24:mi:ss
    NLS_TIMESTAMP_FORMAT=syyyy-mm-dd hh24:mi:ss.ff9
    ORACLE_HOME=
    DaemonCWD = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info
    DaemonLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/tterrors.log
    DaemonOptionsFile = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttendaemon.options
    Platform = Linux/x86/32bit
    SupportLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttmesg.log
    Uptime = 136177 seconds
    Backcompat = no
    Group = 'dba'
    Daemon pid 8111 port 53384 instance ttimdb1
    End of report
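    One detail that stands out in this dump: ORACLE_HOME= is empty in the daemon environment, and OCIEnvCreate is exactly the call that would care about the Oracle client setup. A sketch worth trying (an assumption, not a confirmed fix): stop the TimesTen main daemon, set the Oracle environment in the shell that starts it, and restart:
    ttDaemonAdmin -stop
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    ttDaemonAdmin -start
    Then retry ttCacheStart from ttIsql.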

  • Star schema for a uploaded data sheet

    Hi All gurus,
    I am new to this tech. I have a requirement where I have to prepare a star schema for the data sheet below.
    REPORT_DATE     PREPARED_BY     Units On-time     Units Late     Non-Critical On-time     Non-Critical Lates      Non-Critical DK On-time     Non-Critical DK Lates
    2011-01     Team1     1
    2011-02     Team1                              
    2011-03     Team1                              
    2011-01     Team2
    2011-02     Team2                         7     1
    2011-03     Team2                              4 5
    2011-01     Team3
    2011-02     Team3                              
    2011-03     Team3     1                         3
    (Take blank fields as zeros)
    Note: there are 3 report dates (2011-01, 2011-02, 2011-03) and three teams (Team1, Team2, Team3) as text data; all the other columns contain number data.
    I am given Time as a dimension table containing the Report Date, and the whole sheet as the Data table. So how do I define the relationship for this in the Physical layer and the BMM?
    I am thinking of making Time a dimension table and the whole table (as Data) a fact table in the Physical layer. Then, in the BMM, I want to carve out a logical dimension called Group from the Data physical table and make Group and Time dimension tables, with Data as the fact table.
    Is this approach correct? Please suggest, and if you have a better idea, note down which tables should be taken as dimension and fact tables in both the Physical layer and the BMM. Your help will be appreciated, so thanks in advance. You could also advise on any change in the number of physical tables in the physical schema design.

    Your suggestions are eagerly anticipated.

  • How to Set a Variable with data from Source Data Store

    Hello ODI Experts,
    I have created a Physical & Logical Schema and a Source Data Store to pick up data from a database table.
    On the other hand, I have a few variables that I will pass in a web service call (the ODIInvokeWebService tool).
    Would you please guide me on how I can set variables from my source data store?
    Thanks & Regards,
    Ahsan

    Hello Bos/Damodhar/ODI Experts,
    Doesn't that give me a less optimized approach, picking one column per query (per variable)?
    Let's say I have to pick 35 columns from a table and put them in 35 variables. It would mean running 35 queries to fetch one record from the database table.
    Doesn't that seem less performance-effective (less optimized)? A little scary. Is there anything better I can do to make it more optimized?
    Another question: what if multiple new values have come into the DB table? Since I am using a Refresh Variable, would this variable have multiple values in it?
    Thanks for all your help,
    Ahsan
    Edited by: Ahsan Asghar on 21-Jun-2011 07:46
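    On the one-query-per-variable point: a refresh query for an ODI variable returns a single value, so 35 variables really do mean 35 refreshes. A sketch of what each refreshing query looks like (the table, column, and project code are illustrative):
    SELECT email_address
    FROM ws_request_staging
    WHERE request_id = '#MYPROJ.REQUEST_ID'
    And on the second question: as far as I recall, a refreshed variable holds only one value even if the query returns several rows, so you would need a loop (or an interface) rather than a single refresh to process multiple new records.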

  • Extending AD schema to sync to Office365

    Hi, we currently are running a fresh AD environment that has never been exposed to Exchange. We are running DirSync to sync AD to Office 365 (one way). We're currently unable to manage several attributes due to never having had an Exchange installation, so we simply need to extend our AD schema to add the necessary attributes. This seems to be a somewhat common problem, but there doesn't seem to be any official documentation/procedure for fixing it. Here are a few things that really need clarification, for anyone looking to extend their schema for Office 365 purposes:
    1. Which Exchange installation should be used to extend the schema?
    2. The objects that were synced to Office 365 initially had some of the attributes we're now missing. Should we be concerned about overwriting these attributes with null values after the schema extension? What is the best method to address these concerns? Is there a list of attributes provided by the schema extension so we can check what may be overwritten?
    Thanks and please help!

    1. Which Exchange installation to use to extend schema?
    I always used Exchange Server 2013.
    2. The objects that were synced to Office365 initially had some of the attributes we're now missing.
    Should we be concerned about overwriting these attributes with null values after the schema extension? 
    If something is wrong with the sync then you should be able to see it in DirSync after the sync attempt. The list of synced attributes is mentioned here: http://social.technet.microsoft.com/wiki/contents/articles/19901.dirsync-list-of-attributes-that-are-synced-by-the-azure-active-directory-sync-tool.aspx
    Of course, the data is synced from AD to Office 365, so you need to take into consideration that your data will be overwritten. The good approach would be to populate these attributes in AD before running the sync.
    This posting is provided AS IS with no warranties or guarantees, and confers no rights.
    Ahmed MALEK
    My Website Link
    My Linkedin Profile
    My MVP Profile
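    On the pre-population point, a quick way to spot users with missing messaging attributes before the sync (a sketch; the OU and the attribute list are assumptions to adapt):
    Import-Module ActiveDirectory
    Get-ADUser -Filter * -SearchBase "OU=Staff,DC=example,DC=com" -Properties mail, proxyAddresses |
      Where-Object { -not $_.mail -or -not $_.proxyAddresses } |
      Select-Object SamAccountName, UserPrincipalName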

  • [HELP] Error: "JDBC theme based FOI support is disabled for the given data source"

    Hi all,
    I have already set up MapViewer version 10.1.3.3 on the BISE1 10g OC4J server. I am currently using JDK 1.6. I created an mvdemo/mvdemo user for the demo data.
    The MapViewer demos run fine without a CHART, but maps that have a CHART, like "Dynamic theme, BI data and Pie chart style" and "Dynamic theme and dynamic Bar chart style", give this error:
    ----------ERROR------------
    Cannot process the FOI response from MapViewer server. Server message is: <?xml version=\"1.0\" encoding=\"UTF-8\" ?> <oms_error> MAPVIEWER-06009: Error processing an FOI request.\nRoot cause:FOIServlet:MAPVIEWER-06016: JDBC theme based FOI support is disabled for the given data source. [mvdemo]</oms_error>.
    ----------END ERROR------
    I searched many threads on this forum; some pointed me to set allow_jdbc_theme_based_foi="true" in mapViewerConfig.xml and restart MapViewer:
    <map_data_source name="mvdemo"
    jdbc_host="localhost"
    jdbc_sid="bise1db"
    jdbc_port="1521"
    jdbc_user="mvdemo"
    jdbc_password="mvdemo"
    jdbc_mode="thin"
    number_of_mappers="3"
    allow_jdbc_theme_based_foi="true"
    />
    Error image: http://i264.photobucket.com/albums/ii176/necrombi/misc/jdbcerror.png
    I have this configuration, but no luck. Could anyone show me how to resolve this problem?
    Rgds,
    Dung Nguyen

    Oops, I managed to get this problem resolved!
    My problem may have come from using both the scott and mvdemo schemas for keeping the demo data (imported demo data).
    The steps I took to resolve it were:
    1) Undeploy MapViewer from Application Server Control (http://localhost:9704/em in my case)
    2) Drop user mvdemo
    3) Download mapviewer kit from Oracle Fusion Middleware MapViewer & Map Builder Version 11.1.1.2
    4) Deploy MapViewer again
    5) Recreate mvdemo and import demo data
    6) Run mcsdefinition.sql and mvdemo.sql as the mvdemo user (already granted DBA)
    7) Edit mapViewerConfig.xml
    <map_data_source name="mvdemo"
    jdbc_host="dungnguyen-pc"
    jdbc_sid="bise1db"
    jdbc_port="1521"
    jdbc_user="mvdemo"
    jdbc_password="!mvdemo"
    jdbc_mode="thin"
    number_of_mappers="3"
    allow_jdbc_theme_based_foi="true"
    />
    Save & Restart MapViewer
    And now all the demos run fine. I hope this is helpful for anyone who hits a problem like mine.

  • Restoring a Data Store with a Large PermSize to a Smaller PermSize

    I have a backup of a data store created by ttBackup. I'd like to restore that data store into a data store with a smaller PermSize. The original data should fit in the new data store. However, I'm encountering errors doing this.
    I've done the following:
    - Ran ttBackup on the source data store.
    - The source data store has a PermSize of 16 GB, but its backup file is about 4.5 GB.
    - On the target machine, I created an entry in the odbc.ini for the new data store, but with a PermSize=6144
    - On the target machine, I then run ttIsql target_dsn_name
    - This works okay and ipcs shows a shared memory segment with a size that correlates to the PermSize=6144.
    - Then that data store is ttDestroy'ed to get it out of the way.
    - Next I ran: ttRestore -fname source_dsn_name -connstr "DSN=target_dsn_name;Preallocate=1;PermSize=6144;TempSize=120" -dir . -noconn
    - This succeeds, but only because "-noconn" was specified.
    - When I first try to connect to the data store by running: ttIsql "DSN=target_dsn_name;PermSize=6144;TempSize=120"
    - It fails with the following error:
    836: Cannot create data store shared-memory segment, error 22
    703: Subdaemon connect to data store failed with error TT836
    - The tterror.log contains:
    19:59:05.54 Err : : 3810: TT14000: TimesTen daemon internal error: Error 22 creating shared segment, KEY 0x0401a8b2
    19:59:05.54 Err : : 3810: -- OS reports invalid shared segment size
    19:59:05.54 Err : : 3810: -- Confirm that SHMMAX kernel parameter is set > datastore size
    19:59:05.54 Err : : 3820: subd: Error identified in [sub.c: line 3188]
    19:59:05.54 Err : : 3820: subd: (Error 836): TT0836: Cannot create data store shared-memory segment, error 22 -- file "db.c", lineno 9342, procedure "sbDbConnect"
    19:59:05.54 Err : : 3820: file "db.c", lineno 9342, procedure "sbDbConnect"
    19:59:05.54 Warn: : 3820: subd: connect trouble, rc 1, reason 836
    19:59:05.54 Err : : 3820: Err 836: TT0836: Cannot create data store shared-memory segment, error 22 -- file "db.c", lineno 9342, procedure "sbDbConnect"
    19:59:05.54 Err : : 3810: TT14000: TimesTen daemon internal error: Could not send 'manage' request to subdaemon rc 400 err1 703 err2 836
    19:59:06.45 Warn: : 3810: 3820 ------------------: subdaemon process exited
    Note that on the target machine total shared memory is configured for only 10 Gb, which is smaller than the size of the original data store.

    Hi Brian,
    ttRestore cannot be used for this purpose. The restored datastore will always have the PermSize that was in effect when it was backed up. You cannot shrink an existing datastore. If you need to move the tables and data to a store with a smaller PermSize (assuming they will fit, of course) then you need to use ttMigrate instead.
    Chris
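    A sketch of the ttMigrate round trip Chris describes, using the DSN names from this thread (the data-file path is an assumption):
    # save objects and data out of the large store
    ttMigrate -c "DSN=source_dsn_name" /tmp/source_dsn.mig
    # create the smaller store first (e.g. connect once with PermSize=6144), then restore into it
    ttMigrate -r "DSN=target_dsn_name;PermSize=6144;TempSize=120" /tmp/source_dsn.mig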

  • How create data store with PermSize = 4096MB on HP-UX 64-bit?

    Hi!
    I use TimesTen 7.0.2:
    TimesTen Release 7.0.2.0.0 (64 bit HPUX/IPF) (tt70_1:17001) 2007-05-02T05:22:15Z
    Instance admin: root
    Instance home directory: /opt/TimesTen/tt70_1
    Daemon home directory: /var/TimesTen/tt70_1
    Access control enabled.
    I set PermSize=4096MB for my new data store. Then I tried to create it:
    ttIsql -connStr "DSN=tt_rddb1;UID=ttsys;PWD=ttsys;OraclePWD=ttsys;Overwrite=1" -e "exit;"
    But operation was failed:
    836: Cannot create data store shared-memory segment, error 22.
    Can I create a data store of this size on HP-UX 64-bit?

    Is largefiles enabled? I believe you can check with fsadm -F vxfs /filesystem
    Also, please understand that PermSize is not the only attribute affecting the size of the TimesTen shared memory segment. The actual resulting size is
    PermSize + TempSize + LogBuffSize + overhead
    So you would need to configure shmmax to be > 4 GB. Have you tried setting it to (say) 8 GB, just for testing purposes, to see if it eliminates the error?
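    For reference, on HP-UX 11i v2 or later the check and the test bump would look something like this (a sketch; whether the change applies immediately or needs a reboot depends on the release):
    kctune shmmax
    kctune shmmax=8589934592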

  • Extending AD Schema (Unity 4.0(2))

    We are currently running Unity 4.0(2) with Exchange 2000 as a partner. We need to upgrade to the latest shipping Unity version and reconfigure Unity to communicate with Exchange 2003. Do we need to extend the AD schema again? If so, what is the difference between extending the schema for Exchange 2000 and for Exchange 2003? Will this not cause duplicate entries in AD?

    Hi Andy,
    http://www.cisco.com/univercd/cc/td/doc/product/voice/c_unity/rug/ex/ru_110.htm#wp1359133
    Yes, extend the schema again. On Cisco Unity DVD 1 or CD 1, or from the location to which you saved the downloaded Cisco Unity CD 1 image files, browse to the ADSchemaSetup directory and double-click ADSchemaSetup.exe.
    To see the changes that the schema update program makes, browse to the Schema\LdifScripts directory on Cisco Unity DVD 1 or CD 1 and view the file Avdirmonex2k.ldf.
    Schema Changes in Exchange Server 2003
    http://www.microsoft.com/technet/prodtechnol/exchange/guides/WhatNewE2k3/aa9bc812-6f7f-4892-8bf0-06f5eff204bb.mspx?mfr=true
    Upgrading from Exchange 2000 Server to Exchange Server 2003
    http://www.microsoft.com/technet/prodtechnol/exchange/guides/Ex2k3DepGuide/428e1090-c8a4-492c-9079-92ff2b588d55.mspx?mfr=true
    The main differences between Exchange 2000 and 2003:
    Improved security, including all the features of IIS 6.0.
    HTTP over RPC means you do not need to configure a VPN for OWA.
    Up to 8-node Active/Passive clustering.
    Volume Shadow Copy for backup.
    Super upgrade tools like ExDeploy.
    pfMigrate utility to move public folders from legacy systems.
    An attempt to control Junk email both on the client and the server.
    Upgrade should go fine and no duplicate entries will be created.
    HTH
    //G

  • Endeca Data Store Repository , Commit interval

    Hi All,
    I have been using Endeca 2.3 for quite some time now and have successfully developed a Studio application. But the following are a few questions I still fail to answer:
    1. Where does the data store reside? What is the location of the data store records on the server? In which format are the records stored in the MDEX engine? Can we have access to those records?
    The most frequent question from my demonstrations: where is the data stored when we load data into a data store?
    2. Is there anything like a commit interval/frequency? I.e., when I have 10,000 records to be loaded and my graph loads 3k records and then fails due to some exception, I need to reload the data from scratch. I don't have any point from which I can resume the graphs.
    So as per my understanding there is no commit interval for data loading onto the data store.
    Can anyone throw some light on the same?
    Thanks in Advance,
    Kartik P.

    Hi Kartik,
    To answer your questions:
    1. Where will the data store reside?
    A: Once the source data is loaded into the Oracle Endeca Server, the data turns into an Endeca Server index (also referred to in the doc as files on disk). The location of the data store in the Endeca Server is C:\Oracle\EndecaServer<version>\endeca-server\data\<data_store_name>_indexes, assuming you installed in the C directory on Windows.
    Q: In which format are the records stored on the MDEX engine. Can we have access to those records?
    A: Records are stored in the internal binary format and there is no outside access to these records; they are for the Endeca Server consumption.
    Also, please see the doc about using the endeca-cmd command, for managing the data store, in the Endeca Server Administrator's Guide:
    http://docs.oracle.com/cd/E29805_01/server.230/es_admin/src/cadm_cmd_root.html
    2. Q: Is there anything like a commit interval/frequency? I.e., when I have 10,000 records to be loaded and my graph loads 3k records and then fails due to some exception, I need to reload the data from scratch; I don't have any point from which I can resume the graphs. So as per my understanding there is no commit interval for data loading onto the data store.
    A: For this, the closest answer is to bundle your data updates (data loads) inside a single outer transaction. If you do so, the entire update commits or fails as a unit. For information on running transaction graphs, you can see the Endeca Information Discovery Integrator Components Guide, and a section on the outer transactions in the Endeca Server Developer's Guide: http://docs.oracle.com/cd/E29805_01/server.230/es_dev/src/cdg_txnWS_root.html
    The basic idea is to create a graph in the Integrator that starts an outer transaction, and then run several updating operations within this graph. To make things work, you need to make sure the outer transaction ID is specified correctly in all operations inside this graph, and also not to start more than one outer transaction graph at a time (only one outer transaction operation can run at a time inside the Endeca Server; it must be committed or rolled back before another outer transaction can be started).
    Here is a quote from the Integrator Components guide:
    "An outer transaction is a set of operations performed in the Oracle Endeca Server data store that is viewed as a single unit. If an outer transaction is committed, this means that all of the data and configuration changes made during the transaction have completed successfully and are committed to the data store index.
    If any of the changes made within a transaction fail to complete successfully, the outer transaction fails to commit and remains open (only one outer transaction can be open at a time). In this case, you can roll back the entire transaction, and the changes to the data store index do not occur.
    In general, the best practice is to set up operations so that successful updates are automatically committed (this is the default), but failed updates can be rolled back either automatically or manually."
    Hope this helps,
    Julia
