Advice on Storage Backend

We're about ready to deploy our first Oracle VM infrastructure. Our budget allows only one SAN for now; we will add another in the next 12 months. Currently we're debating the use of either:
A - OmniOS w/ ZFS + Napp-IT (experienced with it)
B - Oracle OCFS (not much experience)
That being said, which storage backend is advised, and which transport method: iSCSI or NFS?
Thanks, Joe

Hi,
I don't have any idea about the OmniOS w/ZFS ...
but I will explain the difference between iSCSI and NFS from the Oracle VM Server perspective.
NFS: a commonly used file-based storage system that is very suitable for the installation of Oracle VM storage repositories.
     Since most of these resources are rarely written to but are read frequently, NFS is ideal for storing these types of resources.
     Since mounting an NFS share can be done on any server in the network segment to which NFS is exposed, it is possible not only to share NFS storage between servers of the same pool but also across different server pools.
    =>      In terms of performance, NFS is slower for virtual disk I/O compared to a logical volume or a raw disk. This is due mostly to its file-based nature.
For better disk performance, you should consider using block-based storage, which is supported in Oracle VM in the form of iSCSI or Fibre Channel SANs.
iSCSI:  With Internet SCSI, or iSCSI, you can connect storage entities to client machines, making the disks behave as if they are locally attached disks.
     =>     Performance-wise an iSCSI SAN is better than file-based storage like NFS and it is often comparable to direct local disk access. Because iSCSI storage is attached from a remote server it is perfectly suited for a clustered server pool configuration where high availability of storage and the possibility to live migrate virtual machines are important factors.
So, to create a backup/snapshot of your VM, it is recommended to use the OCFS2 file system (on iSCSI) and take the snapshot with OCFS2's reflink feature, then move your snapshot to another repository based on NFS.
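That workflow can be sketched in shell. The paths below are throwaway stand-ins, not actual Oracle VM repository paths; on OCFS2 a copy-on-write clone can be made with cp --reflink (or the reflink tool shipped with ocfs2-tools), and --reflink=auto falls back to a plain copy on filesystems without reflink support:

```shell
# Snapshot-and-move sketch using throwaway files; on a real setup SRC
# would live in an OCFS2 (iSCSI-backed) storage repository and the final
# mv would target an NFS-backed repository.
SRC=$(mktemp /tmp/myvm-XXXXXX.img)
printf 'disk image contents\n' > "$SRC"
SNAP="${SRC%.img}-snap.img"

# On OCFS2 this is an instantaneous copy-on-write clone (reflink);
# elsewhere --reflink=auto silently degrades to a normal copy.
cp --reflink=auto "$SRC" "$SNAP"

cmp -s "$SRC" "$SNAP" && echo "snapshot OK"

# Then move the snapshot off to the NFS-based backup repository, e.g.:
# mv "$SNAP" /OVS/Repositories/<nfs-repo>/VirtualDisks/
```

The reflink clone is cheap because only metadata is duplicated at snapshot time; data blocks are copied only when either file is later written.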
I hope this can help you
Best Regards

Similar Messages

  • Advice on storage space with hosts.

    I'm new to creating a website and hosting it through a company. I know that .Mac has its own website hosting, but I'm curious whether that is a decent one to host through. Does it give enough space for data transfer and all that, or are there better hosts out there? What is a good amount of space to have? I'm currently debating between using iWeb or RapidWeaver to create my site. I came across a hosting site that I've seen recommended, but I'm not sure if it's good enough. Here's the link:
    http://littleoak.net/index.html
    I'm really leaning toward RapidWeaver because of how it's more customizable, and Little Oak has specific hosting for it. Any ideas to help me out?

    If RapidWeaver does what you need, go for it, but I would fool around with the demo first and make sure I liked it.
    The host you refer to does not seem to give you much storage space, if I understand it correctly.
    Don't know how large your site will be, but even 1500 MB might not do it.
    These days hosts generally talk in gigs, not megs. Again, if I understand what they are offering.
    Take a look at a host like GoDaddy and compare what they offer.

  • Storage Thread Count

    I'm trying to write up advice for Storage node thread count.
    Obviously, this is dependent on the machines - we have 48G 8-processor machines, running 20 x 2G storage nodes and 2 x 2G proxy nodes.
    My understanding is that when you don't have a cache store you would set the thread count to 0 (run on the service thread).
    I'm wondering how one might calculate this if you have a cache store (and whether it's different if you have write-through / write-behind).
    I can see that in write-through it will spend most of the time waiting on the database, so a higher thread count would be good (it's a pity it's not a variable-size pool with min and max).
    I'm not sure about write-behind - I can imagine that a low thread count might be reasonable here.
    I'm sure Rob (or others ) have a good formula for this.
    Best, Andrew.

    I have a brief recollection (I may be wrong about this) that there is some info available via JMX about average thread pool usage that may be of assistance: if the average is close to your specified max, it may pay off to increase it further and see if throughput goes up or down; if the average is close to zero, it may be just as well to use a zero thread count rather than what is used at the moment.
    One must also decide what is most important: to max out throughput or to limit median response time. With a larger thread pool, more queries can be in progress at the same time (hopefully reducing median response time, since long-running queries don't lock out other queries), but resources will be split between them (at least if the thread count is larger than the number of cores), making the total throughput lower...
    /Magnus

  • RAC Storage Options ( Openfiler or Wasabi )

    Hi RAC gurus..
    I need your advice regarding storage options for our RAC development environment.
    Since it is a development environment, management is reluctant to invest in a SAN. Our production is a 2-node RAC on Windows 2003 Server mounted on a Dell SAN.
    We have two spare servers available with Intel Xeon processors and SAS disks on them. A SAN is out of the question here, and since we are on Windows we cannot use NAS.
    I was reviewing the storage options and came across Openfiler and Wasabi Storage Builder, software that can transform your spare servers into virtual SANs for shared storage purposes.
    I want to know if it's reliable and scalable, and whether it can be put into production one day or not. Which one is better, Openfiler or Wasabi?

    whether it can be put to production one day or not?
    You need to ask Oracle Support to get a correct answer; moreover, for a production system I would only use certified hardware.

  • Central storage solution for video & digital content

    I'm looking for advice on storage solutions for a small scale (audio) recording studio which is expanding into video and post production.
    There is a simple wired/wireless network connected via a simple Netgear wireless/ethernet router to which two MacPros (ethernet), one iMac (wireless) and one MacBook (wireless) are connected.
    Currently audio is recorded onto 10,000rpm drives in the main MacPro and then archived onto SATA drives in a firewire Wiebetech traydock system. There is also an Apple Time Capsule which backs up the other computers.
    The system is rather fragmented and messy and I'm looking to upgrade to a simpler and more resilient system for managing all this. I've been using Mac for a long time but haven't kept abreast of central storage solutions.
    Advice on a simple central storage solution would be appreciated. Thanks.

    One thing I'm unsure about is the difference between Firewire 800 and Ethernet as connection protocols between a computer and an external drive or RAID. I've seen "networked" external storage devices that look quite simple (e.g. the IOMEGA StorCenter ix2-200 Network-Attached Storage - 4TB).
    Are both suitable for connecting an external drive/RAID, providing adequate bandwidth for uncompressed HD video editing?
    With a standard 4 year old MacPro and ethernet routed via my Netgear Wireless Router, what would the bandwidth be?
    Thanks for any answers to these basic questions.

  • Storage Disappeared! Sort of...

    Hello all,
    I'm running a single-host instance of VDI 3.0 on a Sun Fire X2200. The storage backend is a Unified Storage 7110. I recently updated the 7110 to the latest firmware version - after which, VDI tells me that my storage is unresponsive. The system shows me as having no storage whatsoever, and yet I am typing this forum post from a virtual machine that was booted and is running from the "missing" storage pool.
    As a result of this madness, I am unable to import and boot any new virtual machines. I attempted to add a new storage backend to the system, and when I put in the credentials of the 7110, it does not present the system with a certificate. I can, however, ssh directly into the storage from the VDI provider host.
    I appreciate any insight, because I am totally at a loss on this one.
    Thanks,
    Bob Prendergast

    Hi Bob.
    I suppose you use the 2009.Q2.0.0 version of the 7110 software? The SSH ciphers offered by the SSHD of that release do not have a single match in the cipher list of the SSH client of VDI 3, so all SSH connections fail. This issue is fixed in the upcoming VDI 3 patch, to be released at the end of this month.
    If you are really in urgent need of a workaround - and I can't emphasize enough that this workaround isn't even sanity-tested yet, so apply it at your own peril - you can download version 0.1.41 of the jsch library from http://prdownloads.sourceforge.net/jsch/jsch-0.1.41.jar?download. Shut down the VDI service on every VDI host ("cacaoadm stop --force"), replace the old library at /opt/SUNWvda/lib/jsch-0.1.39.jar with the new one (you need to rename the new library to match the name of the old library) and restart the VDI services again ("cacaoadm start").
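    The swap itself boils down to a few commands. The sketch below runs against a scratch directory so it is safe to try; on a real VDI host VDI_LIB is /opt/SUNWvda/lib, the new jar is the one downloaded from SourceForge, and the copy steps are wrapped by "cacaoadm stop --force" / "cacaoadm start" as described above:

```shell
# Replace the bundled jsch 0.1.39 with 0.1.41, keeping the OLD file name
# so the VDI classpath still resolves. A scratch directory stands in for
# /opt/SUNWvda/lib here; the payload files stand in for the real jars.
VDI_LIB=$(mktemp -d)
printf 'jsch 0.1.39 payload\n' > "$VDI_LIB/jsch-0.1.39.jar"
printf 'jsch 0.1.41 payload\n' > jsch-0.1.41.jar   # stand-in for the download

# Keep a backup of the shipped library, then install the new jar under
# the old name:
cp "$VDI_LIB/jsch-0.1.39.jar" "$VDI_LIB/jsch-0.1.39.jar.orig"
cp jsch-0.1.41.jar "$VDI_LIB/jsch-0.1.39.jar"

grep -q '0.1.41' "$VDI_LIB/jsch-0.1.39.jar" && echo "swap done"
```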
    ~Thomas

  • Migration to SDK 2.1, from HWC to IIS

    Hello,
    I have an Azure webrole running on SDK 1.8; it has a handful of WCF services and a common data access component to access an Azure SQL database and Azure Storage. It was running using the Hosted Web Core (HWC) - no Site definitions, as I do not host websites on it - and so far this has worked really well for the last two years.
    Now I wanted to update it to SDK 2.1 (which should be the latest for VS2010) and I am facing several issues:
    I learned that hosting the WCF services via HWC is no longer possible since SDK 2.0 and I have to use IIS.
    This leads to the problem that the webrole is running under a different process as the webservices.
    So far I have created a common component that is initialized when the webrole is started. It reads the configuration, which contains the parameters to access the SQL database and the storage. This component is used by the webservices to read/write data to the database and storage.
    Now with IIS, the initialization only happens for the process of the webrole. This leaves the webservices with a non-initialized data access component.
    To the questions:
    1: What would be the best practice for accessing a SQL database and storage from the webservices? These have been completely stateless so far, but now each would need to read the access parameters for the database and storage.
    I am lacking a defined entry point there to initialize the component.
    2: Why was HWC hosting removed? For my scenario (Webservices with database/storage backend) I see no benefits of using IIS (just a lot of issues for migrating and redesigning the architecture).
    Any advice would be helpful, also an explanation why HWC was removed (so I can understand it).

    Hello Will, thanks for the answer,
    Yes, I am using an Azure-hosted SQL database. I would like to keep the SQL Server connection parameters in the webrole configuration instead of the web.config. That way I can react to role configuration changes and don't need to recycle the role when switching databases. The same goes for when storage settings change.
    I have found the article you posted about IIS and the changes from HWC as well, but I found no explanation of why the possibility of hosting a webservice HWC-style was removed.
    Did it cause problems or issues or was it a strategic change to only go the IIS way? So far I did not have many issues hosting webservices without IIS (in contrast to experiences with IIS, which did not always go smoothly).
    I am now rewriting the data access component to initialize itself lazy-loading style on first access (even though I am no big fan of that pattern; I usually prefer a well-defined initialization at startup).
    Anyway, I know I can't go back because sooner or later the older SDK versions will become unsupported. Learning why HWC was removed would help to understand things better.
    Maybe this post helps others to identify migration issues early without a lot of searching.
    Thanks for the help.

  • Bridge failed to restart...

    Hi all,
    I'm using OpenMQ (the version provided with GlassFish 3.0.1).
    Here is the output of the version I run:
    ================================================================================
    Open Message Queue 4.4
    Oracle
    Version: 4.4 Update 2 (Build 5-a)
    Compile: Fri May 14 23:24:45 PDT 2010
    Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
    ================================================================================
    com.sun.messaging.jmq Version Information
         Product Compatibility Version:          4.4
         Protocol Version:               4.4
         Target JMS API Version:               1.1
    Java Runtime Version: 1.6.0_12 Sun Microsystems Inc. /usr/lib/jvm/java-6-sun-1.6.0.12/jre
    I've successfully created and run a bridge configuration routing messages between two OpenMQ brokers (clustered on both sides) running the same version.
    The storage backend is JavaDB.
    When I start the cluster broker on my side (which is here to consume what is produced on the other side) for the first time, everything runs smoothly: I receive all messages.
    For debugging purposes, I need to stop the bridge on my side in order to leave messages in the remote topic, redeploy the application on my side (MDB), and restart the bridge to get the message flow again.
    So I stop the bridge by using the imqbridgemgr stop bridge -bn xxx command.
    When I restart it (using imqbridgemgr start bridge -bn xxxx command), I get this error :
    glassfish@web2:~/glassfishv3/mq/bin$ ./imqbridgemgr -u admin -passfile ~/.passfile start bridge -bn c2b2_to_c1
    Starting the bridge where:
    Bridge Name
    c2b2_to_c1
    On the broker specified by:
    Host Primary Port
    localhost 7676
    Are you sure you want to start this bridge ? (y/n)[n] y
    Error while performing this operation on the broker.
    [B3100]: Unexpected Broker Internal Error : [Unable to retry database operation]
    An attempt was made to get a data value of type 'java.sql.Types 12' from a data value of type 'java.io.InputStream'.
    Starting the bridge failed.
    And it fails every time. The only way I can get things running again is by recreating all the broker tables.
    Any idea ?
    Thx in advance for your advices/answers.
    Thierry

    This is a bug (6973877, Derby only) which has been fixed in 4.5 build 14 and 4.4u2p1. Thanks for reporting. The problem is the missing "FOR BIT DATA" on the LOG_RECORD column in the Derby table definition MQTMLRJMSBG41 in the broker's default.properties file:
    - LOG_RECORD VARCHAR(2048) NOT NULL,\
    + LOG_RECORD VARCHAR(2048) FOR BIT DATA NOT NULL,\
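    The same change can be applied with a one-line sed, demonstrated below on the affected line itself (on an installed broker you would run the substitution against default.properties, whose location depends on your install):

```shell
# Add FOR BIT DATA to the LOG_RECORD column definition so Derby stores
# the bridge log record as binary data instead of a character VARCHAR
# (the root cause of the InputStream/VARCHAR type error above).
printf '%s\n' 'LOG_RECORD VARCHAR(2048) NOT NULL,\' \
  | sed 's/VARCHAR(2048) NOT NULL/VARCHAR(2048) FOR BIT DATA NOT NULL/'
# prints: LOG_RECORD VARCHAR(2048) FOR BIT DATA NOT NULL,\
```

Note that the broker tables created with the old definition still have to be recreated, as described above, for the fix to take effect.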

  • Do I need a Singleton pattern for this case

    Hello,
    I'm writing a game in Java and I have a very simple score manager class which just tracks the points the player has so far. I need to access this class in different parts of my app (GUI, game core, ...) so I created a singleton class which looks like this:
    public class ScoreManager {
        private static ScoreManager instance = new ScoreManager();
        private int score = 0;
        private int highScore = 0;
        public static ScoreManager getInstance() {
            return instance;
        }
        public int getScore() {
            return score;
        }
        public int getHighScore() {
            return highScore;
        }
        public void addScore(int scoreToAdd) {
            score += scoreToAdd;
            if (score > highScore) {
                highScore = score;
            }
        }
    }
    so far so good ..
    I would like to read the "highScore" from a file when the game starts and write it back when the game ends. I added those two methods:
    public void init(File highScoreFile) {
        highScore = readFromFile(highScoreFile);
        score = 0;
    }
    public void dispose(File highScoreFile) {
        writeToFile(highScoreFile);
    }
    So basically I call the init() method when the game starts and the dispose() when the game ends.
    It works but what I don't like is that the init() and dispose() methods are exposed and someone could call them in a wrong place.
    What would be a better way to do this ?
    Thanks for your help,
    Jesse

    safarmer wrote:
    You could keep track of the state (initialised, destroyed etc) in the manager and only perform the action if it is an expected state.
    private enum State { NOT_INITIALISED, INITIALISED, DESTROYED};
    private State currentState = State.NOT_INITIALISED;
    // i will leave the rest up to your imagination :)
    this looks good, thanks
    anotherAikman wrote:
    >
    It works but what I don't like is that the init() and dispose() methods are exposed and someone could call them in a wrong place.
    And who would that be? You're the only one using the code, aren't you?
    If not, you could still include in the documentation where to call those methods.
    no I'm not the only one working on this. Documentation can be useful but does not prevent calling wrong methods.
    YoungWinston wrote:
    I don't see any constructor. Usually, a singleton class should have a private one, even if it has no parameters. If you don't have any, Java will create a public no-parameter one as default.
    OK, I forgot the private constructor.
    It works but what I don't like is that the init() and dispose() methods are exposed and someone could call them in a wrong place.
    Then my advice would be not to make them public. After all, your code is the only one calling these methods - yes?
    Yes, only the code of the app calls it.
    If you are convinced that your game requires one and only one score manager, then a singleton is probably the best way to go.
    I'm a little worried about the init() and dispose() methods though, because these suggest state changes, which is unusual for the singleton pattern. You may want to think about synchronizing them.
    An alternative might be to use properties to get and store your scores.
    OK for the synchronization. What would using properties give me? It would be just another type of storage backend and I'd still need to read/write it.
    Thanks for your help,
    J

  • What's the difference between a web site and a web application?

    I'm stumped trying to come up with a difference between a web site and a web application for myself. As I see it, a web site points to a specific page and a web application is more of some sort of 'portal' to content and information.
    But where I'm stuck is that a web application is still viewed through a browser (is it not?) and a web site can still view content dynamically, making the line between web site and application pretty gray.
    For instance, does a web site using ASP.NET or AJAX (I assume ASP.NET is AJAX's proprietary sibling; if not, ignore ASP.NET and concentrate on the AJAX) become a web application because it can retrieve data dynamically and asynchronously, or would a website using PHP and a CMS be more of a web application because it forms the pages on request, based on the request of the client and the content in its database?
    Or maybe I'm totally wrong here - what differentiates a web application from a website?
    This is totally personal and subjective, but I'd say that a website is defined by its content, while a webapplication is
    defined by its interaction with the user. That is, a website can plausibly consist of a static content repository that's dealt out to all visitors, while a web application depends on interaction and requires programmatic user input and data processing.
    For example, a news site would be a "website", but a spreadsheet or a collaborative calendar would be web "applications". The news site shows essentially the same information to all visitors, while the calendar processes individual data.
    Practically, most websites with quickly changing content will also rely on a sophisticated programmatic (and/or database) backend, but at least in principle they're only defined by their output. The web application on the other hand is essentially a program that
    runs remotely, and it depends fundamentally on a processing and a data storage backend.
    There is no real "difference". Web site is a more anachronistic term that exists from the early days of
    the internet where the notion of a dynamic application that can respond to user input was much more limited and much less common. Commercial websites started out largely as interactive brochures (with the notable exception of hotel/airline reservation sites).
    Over time their functionality (and the supporting technologies) became more and more responsive and the line between an application that you install on your computer and one that exists in the cloud became more and more blurred.
    If you're just looking to express yourself clearly when speaking about what you're building, I would continue to describe something that is an interactive brochure or business card as a "web site" and something that actually does something that feels more like an application as a "web app".
    The most basic distinction would be if a website has a supporting database that stores user data and modifies what the user sees based on some user specified criteria, then it's probably an app of some sort (although I would be reluctant to describe Amazon.com
    as a web app, even though it has a lot of very user-specific functionality). If, on the other hand, it is mostly static .html files that link to one another, I would call that a web site.
    Most often, these days, a web app will have a large portion of its functionality written in something that runs on the client (doing much of the processing in either JavaScript or ActionScript, depending on how it's implemented) and reaches back through some HTTP process to the server for supporting data. The user doesn't move from page to page as much and experiences whatever they're going to experience on a single "page" that creates the app experience for them.

    ...can I make as many iWeb websites as I want? ...and as many blogs as I want? ...I have never made one before....
    ....although, I do have my own small business and I do have a website that I paid a guy to make and also host....(which is a waste of $$$$ in my opinion, as I think I can do a better job making one myself through iWeb)....
    ...anyways, I know it is splitting hairs, but what exactly is the difference between a blog and a website? ...I am under the impression that a blog is just a personal newsletter sort of thing...?

  • WLC 5508: 802.1X AAA override; authentication succeeds, no dynamic VLAN assignment

    WLC 5508: software version 7.0.98.0
    Windows 7 Client
    Radius Server:  Fedora Core 13 / Freeradius with LDAP storage backend
    I have followed the guide at http://www.cisco.com/en/US/tech/tk722/tk809/technologies_configuration_example09186a008076317c.shtml with respect to building the LDAP and FreeRADIUS server.  802.1x authorization and authentication work correctly.  The session keys are returned from the RADIUS server and the WLC sends the appropriate information for the client to generate the WEP key.
    However, the WLC does not override the VLAN assignment, even though I believe I set everything up correctly.  From the packet capture, you can see that verification that the client is authorized to use the WLAN returns the needed attributes:
    AVP: l=4  t=Tunnel-Private-Group-Id(81): 10
    AVP: l=6  t=Tunnel-Medium-Type(65): IEEE-802(6)
    AVP: l=6  t=Tunnel-Type(64): VLAN(13)
    I attached a packet capture and wlc config, any guidance toward the attributes that may be missing or not set correctly in the config would be most appreciated.

    Yes, good catch - I had one setting left off in FreeRADIUS that allows the inner reply attributes back into the outer tunneled accept.  I wrote up a medium-high-level config for any future viewers of this thread:
    The following was tested and verified on a Fedora 13 installation.   This is a minimal setup, not meant for a "live" network (security issues with cleartext passwords, LDAP not indexed properly for performance).
    Install Packages
    1.  Install needed packages.
    yum install openldap*
    yum install freeradius*
    2.  Set the services to start automatically on system startup
    chkconfig --level 2345 slapd on
    chkconfig --level 2345 radiusd on
    Configure and start LDAP
    1.  Copy the needed LDAP schema for RADIUS.  Your path may vary a bit
    cp /usr/share/doc/freeradius*/examples/openldap.schema /etc/openldap/schema/radius.schema
    2.  Create an admin password for slapd.  Record this password for later use when configuring the slapd.conf file
    slappasswd
    3.  Add the ldap user and group if they don't exist.  Depending on the install RPM, they may have been created already
    useradd ldap
    groupadd ldap
    4.  Create the directory and assign permissions for the database files
    mkdir /var/lib/ldap
    chmod 700 /var/lib/ldap
    chown ldap:ldap /var/lib/ldap
    5.  Edit the slapd.conf file.
    cd /etc/openldap
    vi slapd.conf
    # See slapd.conf(5) for details on configuration options.
    # This file should NOT be world readable.
    #Default needed schemas
    include        /etc/openldap/schema/corba.schema
    include        /etc/openldap/schema/core.schema
    include        /etc/openldap/schema/cosine.schema
    include        /etc/openldap/schema/duaconf.schema
    include        /etc/openldap/schema/dyngroup.schema
    include        /etc/openldap/schema/inetorgperson.schema
    include        /etc/openldap/schema/java.schema
    include        /etc/openldap/schema/misc.schema
    include        /etc/openldap/schema/nis.schema
    include        /etc/openldap/schema/openldap.schema
    include        /etc/openldap/schema/ppolicy.schema
    include        /etc/openldap/schema/collective.schema
    #Radius include
    include        /etc/openldap/schema/radius.schema
    #Samba include
    #include        /etc/openldap/schema/samba.schema
    # Allow LDAPv2 client connections.  This is NOT the default.
    allow bind_v2
    # Do not enable referrals until AFTER you have a working directory
    # service AND an understanding of referrals.
    #referral    ldap://root.openldap.org
    pidfile        /var/run/openldap/slapd.pid
    argsfile    /var/run/openldap/slapd.args
    # ldbm and/or bdb database definitions
    #Use the Berkeley database
    database    bdb
    #dn suffix, domain components read in order
    suffix        "dc=cisco,dc=com"
    checkpoint    1024 15
    #root container node defined
    rootdn        "cn=Manager,dc=cisco,dc=com"
    # Cleartext passwords, especially for the rootdn, should
    # be avoided.  See slappasswd(8) and slapd.conf(5) for details.
    # Use of strong authentication encouraged.
    # rootpw        secret
    rootpw        {SSHA}cVV/4zKquR4IraFEU7NTG/PIESw8l4JI
    # The database directory MUST exist prior to running slapd AND
    # should only be accessible by the slapd and slap tools. (chown ldap:ldap)
    # Mode 700 recommended.
    directory    /var/lib/ldap
    # Indices to maintain for this database
    index objectClass                       eq,pres
    index uid,memberUid                     eq,pres,sub
    # enable monitoring
    database monitor
    # allow only rootdn to read the monitor
    access to *
             by dn.exact="cn=Manager,dc=cisco,dc=com" read
             by * none
    6.  Remove the slapd.d directory
    cd /etc/openldap
    rm -rf slapd.d
    7.  Hopefully, if everything is correct, you should be able to start up slapd with no problems
    service slapd start
    8.  Create the initial database in a text file called /tmp/initial.ldif
    dn: dc=cisco,dc=com
    objectClass: dcobject
    objectClass: organization
    o: cisco
    dc: cisco

    dn: ou=people,dc=cisco,dc=com
    objectClass: organizationalunit
    ou: people
    description: people

    dn: uid=jonatstr,ou=people,dc=cisco,dc=com
    objectClass: top
    objectClass: radiusprofile
    objectClass: inetOrgPerson
    cn: jonatstr
    sn: jonatstr
    uid: jonatstr
    description: user Jonathan Strickland
    radiusTunnelType: VLAN
    radiusTunnelMediumType: 802
    radiusTunnelPrivateGroupId: 10
    userPassword: ggsg
    9.  Add the file to the database
    ldapadd -h localhost -W -D "cn=Manager, dc=cisco,dc=com" -f /tmp/initial.ldif
    10.  Issue a basic query to the LDAP DB to make sure that we can send requests and receive results back
    ldapsearch -h localhost -W -D cn=Manager,dc=cisco,dc=com -b dc=cisco,dc=com -s sub "objectClass=*"
    Configure and Start FreeRadius
    1. Configure ldap.attrmap, if needed.  This step is only needed if we need to map and pass attributes back to the authenticator (dynamic VLAN assignments, for example).  Below is an example for dynamic VLAN assignments
    cd /etc/raddb
    vi ldap.attrmap
    For dynamic vlan assignments, verify the follow lines exist:
    replyItem    Tunnel-Type                  radiusTunnelType
    replyItem    Tunnel-Medium-Type           radiusTunnelMediumType
    replyItem    Tunnel-Private-Group-Id      radiusTunnelPrivateGroupId
    Since we are planning to use userPassword, we will let the mschap module perform the NT translations for us.  Add the following line to check the LDAP object for userPassword and store it as Cleartext-Password:
    checkItem    Cleartext-Password    userPassword
    2.  Configure eap.conf.  The following attributes should be verified.  You may change other attributes as needed; they are just not covered in this document.
    eap {
        default_eap_type = peap
        .....
        tls {
            # I will not go into details here as this is beyond the scope of
            # setting up freeradius. The defaults will work, as freeradius
            # comes with generated self-signed certificates.
        }
        peap {
            default_eap_type = mschapv2
            # you will have to set this to allow the inner TLS tunnel
            # attributes into the final accept message
            use_tunneled_reply = yes
        }
    }
    3.  Change the authentication and authorization modules and their order.
    cd /etc/raddb/sites-enabled
    vi default
    For the authorize section, uncomment the ldap module.
    For the authenticate section, uncomment the ldap module.
    vi inner-tunnel
    Very important: for the authorize section, ensure the ldap module comes first, before mschap.  Thus authorize will look like:
    authorize {
        ldap
        mschap
        ......
    }
    4.  Configure ldap module
    cd /etc/raddb/modules
    ldap {
        server = localhost
        identity = "cn=Manager,dc=cisco,dc=com"
        password = admin
        basedn = "dc=cisco,dc=com"
        base_filter = "(objectclass=radiusprofile)"
        access_attr = "uid"
        ............
    }
    5.  Start up radiusd in debug mode on another console:
    radiusd -X
    6.  Run a quick sanity test (radtest takes a username and password before the server arguments):
    radtest <username> <password> localhost 12 testing123
    You should get an Access-Accept back.
    7.  Now perform an EAP-PEAP test.  This requires a wpa_supplicant test utility called eapol_test.
    First install the OpenSSL development libraries and gcc, which are required to compile it:
    yum install openssl*
    yum install gcc
    wget http://hostap.epitest.fi/releases/wpa_supplicant-0.6.10.tar.gz 
    tar xvf wpa_supplicant-0.6.10.tar.gz
    cd wpa_supplicant-0.6.10/wpa_supplicant
    vi defconfig
    Uncomment CONFIG_EAPOL_TEST=y, then save and exit.
    cp defconfig .config
    make eapol_test
    cp eapol_test /usr/local/bin
    chmod 755 /usr/local/bin/eapol_test
    8.  Create a test config file named eapol_test.conf.peap
    network={
        eap=PEAP
        eapol_flags=0
        key_mgmt=IEEE8021X
        identity="jonatstr"
        password="ggsg"
        # If you want to verify the server certificate, the line below would be needed
        #ca_cert="/root/ca.pem"
        phase2="auth=MSCHAPV2"
    }
    9.  Run the test
    eapol_test -c ~/eapol_test.conf.peap -a 127.0.0.1 -p 1812 -s testing123
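    On success, eapol_test reports SUCCESS as the final line of its output.  If you want to script the check, a small Python wrapper along these lines can run step 9 and inspect the result (the helper names are ours, not part of wpa_supplicant; the invocation simply repeats the command above):

```python
import subprocess

def eapol_test_passed(output: str) -> bool:
    """Return True if a captured eapol_test run ended in SUCCESS.

    eapol_test prints SUCCESS or FAILURE as its final line, so only
    the tail of the captured output matters.
    """
    lines = [line.strip() for line in output.strip().splitlines()]
    return bool(lines) and lines[-1] == "SUCCESS"

def run_peap_test(conf="eapol_test.conf.peap",
                  server="127.0.0.1", port=1812, secret="testing123"):
    """Run eapol_test with the same arguments as step 9 above."""
    proc = subprocess.run(
        ["eapol_test", "-c", conf, "-a", server, "-p", str(port), "-s", secret],
        capture_output=True, text=True)
    return eapol_test_passed(proc.stdout)
```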

  • [Solved] Can't find sqlite plugin for QT (quassel)

    It's called qt4-sqlite-plugin in other distros. I'm not quite sure what's wrong since I have qt installed.
    Selected storage backend is not available: "SQLite"
    Could not initialize any storage backend! Exiting...
    Currently, Quassel only supports SQLite3. You need to build your Qt library with the sqlite plugin enabled in order for quasselcore to work.
    2.6.30-ARCH, x86_64.
    Last edited by canuckkat (2009-10-06 19:10:15)

    The Arch "qt" package already includes the SQLite backend. You only need to install "sqlite3" and it should work.
    [marti@newn]% pacman -Ql qt |grep sqlite
    qt /usr/lib/qt/plugins/sqldrivers/libqsqlite.so
    [marti@newn]% ldd /usr/lib/qt/plugins/sqldrivers/libqsqlite.so |grep sqlite
    libsqlite3.so.0 => /usr/lib/libsqlite3.so.0 (0x00007fe00aa04000)
    [marti@newn]% pacman -Qo /usr/lib/libsqlite3.so.0
    /usr/lib/libsqlite3.so.0 is owned by sqlite3 3.6.18-1
    [marti@newn]% python
    >>> from PyQt4 import QtSql
    >>> db=QtSql.QSqlDatabase('QSQLITE')
    >>> db.setDatabaseName('/tmp/mydb.sqlite')
    >>> db.open()
    True
    Last edited by intgr (2009-10-04 18:21:16)

  • Using BAPI_DOCUMENT_CREATE2 to create a document and checkin to VAULT

    In a development we are using the function module BAPI_DOCUMENT_CREATE2
    to create a document and check in the original file into the R/3 storage
    system in one go (see subroutine below).
    However, we got the requirement to use a Vault ("Tresor") as the storage
    backend. Unfortunately, just setting the documentfiles-storagecategory element to 'VAULT' results in three things:
    - the document gets created OK
    - the original file does NOT get checked in
    - the error message "Original 1 wurde bereits eingecheckt und abgelegt" (Original 1 has already been checked in and stored)
    So please advise how to use the BAPI modules to achieve the check-in into
    a Vault storage.
    Could this have something to do with the undocumented documentfiles-originaltype element? What does it have to be set to instead of the '1' that works with SAP-SYSTEM storage?
    *&      Form  l_create_doc_and_save
    *       text
    *      -->P_LG_IOTAB  text
    *      <--P_YRAD_DOCFIELDSIOT  text
    *AP 001 ins
    FORM l_create_doc_and_save
                   TABLES x_iotab STRUCTURE ccihs_am07iot
                   CHANGING x_yrad_docfields STRUCTURE yrad_docfieldsiot.
    * create a new document in DMS with data from x_yrad_docfieldsiot
    * and set the new document in x_iotab
      DATA: l_files_tab LIKE bapi_doc_files2 OCCURS 0,
            l_drat_tab LIKE bapi_doc_drat OCCURS 0.
      DATA: l_docdata_wa LIKE bapi_doc_draw2,
            l_files_wa LIKE bapi_doc_files2,
            l_drat_wa LIKE bapi_doc_drat,
            l_return_wa LIKE bapiret2.
      DATA: l_doctype    LIKE bapi_doc_draw2-documenttype,
            l_docnumber  LIKE bapi_doc_draw2-documentnumber,
            l_docpart    LIKE bapi_doc_draw2-documentpart,
            l_docversion LIKE bapi_doc_draw2-documentversion.
    *fill document data
      CLEAR l_docdata_wa.
      l_docdata_wa-documenttype = x_yrad_docfields-dokar.
      l_docdata_wa-statusintern = x_yrad_docfields-statusintern.
    *  l_docdata_wa-documentnumber = x_yrad_docfields-doknr.
      CLEAR l_files_wa.
    *  l_files_wa-storagecategory = lc_database.  " 'SAP-SYSTEM'
      l_files_wa-storagecategory = x_yrad_docfields-storagecategory.
      l_files_wa-docfile = x_yrad_docfields-docpath.
      l_files_wa-originaltype = '1'.   " <--
      l_files_wa-wsapplication = x_yrad_docfields-wsapplication.
      APPEND l_files_wa TO l_files_tab.
      CLEAR l_drat_wa.
      l_drat_wa-language = lc_d.  " 'DE'
      l_drat_wa-description = x_yrad_docfields-description_de.
      APPEND l_drat_wa TO l_drat_tab.
      CLEAR l_drat_wa.
      l_drat_wa-language = lc_e.  " 'EN'
      l_drat_wa-description = x_yrad_docfields-description_en.
      APPEND l_drat_wa TO l_drat_tab.
      CALL FUNCTION 'BAPI_DOCUMENT_CREATE2'
        EXPORTING
          documentdata               = l_docdata_wa
    *   HOSTNAME                   =
    *   DOCBOMCHANGENUMBER         =
    *   DOCBOMVALIDFROM            =
    *   DOCBOMREVISIONLEVEL        =
        IMPORTING
          documenttype               = l_doctype
          documentnumber             = l_docnumber
          documentpart               = l_docpart
          documentversion            = l_docversion
          return                     = l_return_wa
        TABLES
    *   CHARACTERISTICVALUES       =
    *   CLASSALLOCATIONS           =
          documentdescriptions       = l_drat_tab
    *   OBJECTLINKS                =
    *   DOCUMENTSTRUCTURE          =
          documentfiles              = l_files_tab
    *   LONGTEXTS                  =
    *   COMPONENTS                 =
        .
    * error occurred?
      IF l_return_wa-type CA 'EA'.
        ROLLBACK WORK.
        MESSAGE ID '26' TYPE 'I' NUMBER '000'
                WITH l_return_wa-message.
      ELSE.
        COMMIT WORK.
        CLEAR x_iotab.
        x_iotab-zzdokar = l_doctype.
        x_iotab-zzdoknr = l_docnumber.
        x_iotab-zzdokvr = l_docversion.
        x_iotab-zzdoktl = l_docpart.
        x_iotab-linemod = ic_linemode-insert.
        IF sy-langu = lc_d.
          x_iotab-zzdktxt = x_yrad_docfields-description_de.
        ELSEIF sy-langu = lc_e.
          x_iotab-zzdktxt = x_yrad_docfields-description_en.
        ELSE.
          x_iotab-zzdktxt = x_yrad_docfields-description_de.
        ENDIF.
        APPEND x_iotab.
        CALL FUNCTION 'YRAD_LB34_SAVE_PEND_SET'
             EXPORTING
                  i_flg_save_pend = true.
        CLEAR x_yrad_docfields.
      ENDIF.
    ENDFORM.                    " l_create_doc_and_save
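    For reference, the decisive error handling in the FORM above is the `IF l_return_wa-type CA 'EA'` test: ABAP's CA operator succeeds if any character of TYPE occurs in 'EA', i.e. if the BAPIRET2 structure carries an error (E) or abort (A) message, and only then is ROLLBACK WORK issued. A hedged Python sketch of the same commit-or-rollback decision (the dict shape mirrors BAPIRET2; the function names are ours, not SAP's):

```python
def bapi_failed(return_wa: dict) -> bool:
    """Mirror ABAP's `IF l_return_wa-type CA 'EA'`: True when the
    BAPIRET2 TYPE field contains E (error) or A (abort)."""
    return any(ch in "EA" for ch in return_wa.get("TYPE", ""))

def handle_result(return_wa: dict) -> str:
    """Decide, as the FORM does, whether to roll back or commit."""
    if bapi_failed(return_wa):
        return "ROLLBACK"  # ROLLBACK WORK plus the error message dialog
    return "COMMIT"        # COMMIT WORK plus appending the new document keys
```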

    Hi Frank,
    Did you manage to figure out what the problem was?
    Thank you.
    Regards,
    Bruno

  • Generating XML from database

    I'm a total newbie in XML DB and need advice on generating XML from database.
    The situation is the following: there is a legacy web application which uses an XML document as a configuration file. This document conforms to some well-defined schema. For instance:
    <config>
         <title value="TITLE" />
         <subtitle value="SUBTITLE" />
         <style url="default.css" />
         <widgets>
              <widget id="1" opened="true" />
              <widget id="2" opened="false" />
         </widgets>
    </config>
    It contains portions of static data which are common for all users as well as dynamic personal data.
    Dynamic data comes from two sources:
    1) security considerations (for instance, not all widgets are available for some users) - thus the "master" configuration content must be filtered, but not otherwise modified;
    2) user preferences (for instance, user can set widget2 to be opened by default) - thus values of some attributes must be different for each user. When the user saves her preferences, the entire document with new values is posted back to server.
    We want to try to store user preferences, apply security and generate personalized configuration documents using XML DB.
    So we need advice on storage models and generation procedures - which would be the most efficient and the easiest to support or extend.
    Please note, that there is no requirement to actually store data as XML.
    Thanks in advance!
    P.S.: Sorry for the incomplete initial post.
    Edited by: WxD on 27.09.2010 11:45
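    Whichever storage model you pick, generation boils down to two passes over the master configuration: filter the widget list by the user's permissions (remove, don't modify), then overlay the user's attribute preferences. A minimal sketch of that merge logic in Python (the data shapes here are invented for illustration; in Oracle XML DB you would express the same thing with SQL/XML functions such as XMLElement and XMLAgg):

```python
import xml.etree.ElementTree as ET

def build_config(static, widget_ids, allowed_ids, prefs):
    """Build one user's <config> document.

    static      -- shared values: title, subtitle, style url
    widget_ids  -- all widget ids in the master configuration
    allowed_ids -- widget ids this user may see (security filter)
    prefs       -- per-user overrides: widget id -> "true"/"false"
    """
    config = ET.Element("config")
    ET.SubElement(config, "title", value=static["title"])
    ET.SubElement(config, "subtitle", value=static["subtitle"])
    ET.SubElement(config, "style", url=static["style"])
    widgets = ET.SubElement(config, "widgets")
    for wid in widget_ids:
        if wid not in allowed_ids:        # security: filter out, never rewrite
            continue
        opened = prefs.get(wid, "false")  # preference overlay
        ET.SubElement(widgets, "widget", id=str(wid), opened=opened)
    return ET.tostring(config, encoding="unicode")
```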

    Hi,
    See this link for more details
    http://www.stanford.edu/dept/itss/docs/oracle/10g/appdev.101/b10790/xdb13gen.htm

  • Oracle rdf protege plugin and owl

    Does the Oracle RDF plugin for Protege support writing OWL files into the Oracle RDF storage backend? I tried saving a new Protege OWL project and got stack dumps (see below).
    Thanks
    In saveKnowledgeBase 1
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:100)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:130)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:215)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:378)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:434)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:31)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:772)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at oracle.ORDFFrameCreator.finish(ORDFFrameCreator.java:349)
        at edu.stanford.smi.protegex.storage.walker.protege.ProtegeFrameWalker.walk(Unknown Source)
        at oracle.OKnowledgeBaseFactory.saveKnowledgeBase(OKnowledgeBaseFactory.java:177)
        at oracle.OImportExportPlugin.exportProject(OImportExportPlugin.java:124)
        at oracle.OImportExportPlugin.handleExportRequest(OImportExportPlugin.java:109)
        at edu.stanford.smi.protege.ui.ProjectManager.exportProjectRequest(Unknown Source)
        at edu.stanford.smi.protege.action.ExportPluginAction.actionPerformed(Unknown Source)
        at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
        at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
        at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
        at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
        at javax.swing.AbstractButton.doClick(Unknown Source)
        at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
        at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(Unknown Source)
        at java.awt.Component.processMouseEvent(Unknown Source)
        at javax.swing.JComponent.processMouseEvent(Unknown Source)
        at java.awt.Component.processEvent(Unknown Source)
        at java.awt.Container.processEvent(Unknown Source)
        at java.awt.Component.dispatchEventImpl(Unknown Source)
        at java.awt.Container.dispatchEventImpl(Unknown Source)
        at java.awt.Component.dispatchEvent(Unknown Source)
        at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
        at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
        at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
        at java.awt.Container.dispatchEventImpl(Unknown Source)
        at java.awt.Window.dispatchEventImpl(Unknown Source)
        at java.awt.Component.dispatchEvent(Unknown Source)
        at java.awt.EventQueue.dispatchEvent(Unknown Source)
        at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.run(Unknown Source)
    null : : : http://www.w3.org/1999/02/22-rdf-syntax-ns#type : http://protege.stanford.edu/system#owl_ontology
    java.lang.NullPointerException
        at oracle.ORDFFrameCreator.finish(ORDFFrameCreator.java:403)
        at edu.stanford.smi.protegex.storage.walker.protege.ProtegeFrameWalker.walk(Unknown Source)
        at oracle.OKnowledgeBaseFactory.saveKnowledgeBase(OKnowledgeBaseFactory.java:177)
        at oracle.OImportExportPlugin.exportProject(OImportExportPlugin.java:124)
        at oracle.OImportExportPlugin.handleExportRequest(OImportExportPlugin.java:109)
        at edu.stanford.smi.protege.ui.ProjectManager.exportProjectRequest(Unknown Source)
        at edu.stanford.smi.protege.action.ExportPluginAction.actionPerformed(Unknown Source)
        at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
        ... (remaining Swing/AWT event-dispatch frames identical to the trace above)
    WARNING: java.lang.NullPointerException -- OImportExportPlugin.handleErrors()

    dlrubin
    did you ever figure out your problem? I'm having the same error.
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
    Not 100% sure where my error is coming from, whether it's a classpath issue or something else. Just wanted to see if you came up with a solution. Thanks
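    For what it's worth, this particular SQLException ("The Network Adapter could not establish the connection") is raised before any SQL runs: the JDBC driver simply could not open a TCP connection, so the usual suspects are a wrong host/port in the connection string or a listener that isn't running. A quick way to rule out basic reachability (the host name below is an invented example; the default Oracle listener port is 1521):

```python
import socket

def listener_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This only proves something is accepting connections on that port;
    it says nothing about credentials, SIDs, or service names.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: listener_reachable("dbhost.example.com", 1521)
```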
