Servlet - xml data storage best practice

Hello - I am creating a webapp that is a combination of servlets and JSPs. The app will access, store and manipulate data in an XML file. I hope to deploy and distribute the webapp as a WAR file. I have been told that it is a bad idea to assume that the XML file, if included in a directory of the WAR file, will be writeable, as the servlet spec does not guarantee that WARs are "exploded" into real file space. For that matter, it does not guarantee that the file space is writeable at all.
So, what is the best idea for the placement of this XML file? Should I have users create a home directory for the XML file to sit in, so it can be guaranteed to be writeable? And, if so, how should I configure the webapp so that it will know where this file is kept?
Any advice would be gratefully welcomed...
Paul Phillips

Great Question, but I need to take it a little further.
First of all, my advice is to use some independent home directory for the XML file that can be located via a properties file or the like.
This will make life easier when trying to deploy to a server such as JBoss (with Catalina/Tomcat), which doesn't extract the WAR file into a directory. In that case you would need to access your XML file while it is still sitting inside the WAR. I haven't tried this (sounds painful), but I suspect there may be security/access problems when trying to get a FileOutputStream on a file inside the WAR.
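A rough sketch of that approach, not from the original posters: the webapp learns the location of its writable directory from a context parameter in web.xml (with an optional system property override). The parameter name dataDir and the file name app-data.xml are made-up placeholders.
<!-- web.xml: point the webapp at an external, writable data directory -->
<context-param>
    <param-name>dataDir</param-name>
    <param-value>/var/myapp-data</param-value>
</context-param>
A servlet (or a ServletContextListener) can then resolve the file once at startup:
import java.io.File;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class ConfigServlet extends HttpServlet {
    private File xmlFile;

    public void init() throws ServletException {
        // Prefer a system property, fall back to the web.xml context-param.
        String dir = System.getProperty("myapp.dataDir",
                getServletContext().getInitParameter("dataDir"));
        if (dir == null) {
            throw new ServletException("dataDir is not configured");
        }
        xmlFile = new File(dir, "app-data.xml");
        if (!xmlFile.getParentFile().canWrite()) {
            throw new ServletException("Data directory is not writable: " + dir);
        }
        // xmlFile can now be read and rewritten with FileInputStream/FileOutputStream,
        // regardless of whether the container exploded the WAR.
    }
}
Users deploying the WAR would then only need to create the directory and adjust the parameter (or system property) for their environment.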
Anyway, I recommend an independent directory away from the hustle and bustle of the servers' own directories. Having said that, I have a question in return: where do you put a newly created (on-the-fly) JSP that you want accessed via your webapp?
In Tomcat it's easy - just put it in the tomcat/webapps/myapp directory - but this can't be done for JBoss with integrated Tomcat (jboss-3.0.0RC1_tomcat-4.0.3).
Anyone got any ideas on that one?
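Not an answer from the thread, but one workaround sometimes used when new JSPs can't be dropped into an exploded webapp: generate the page into the same external data directory and stream it back through a small servlet, giving up JSP compilation for that content. A minimal sketch, reusing the hypothetical dataDir parameter from the earlier example (servlet mapping and input validation omitted):
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GeneratedPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Map /generated/<name> to a file written earlier into the external directory.
        // A real version would validate 'name' against path traversal.
        String name = (req.getPathInfo() == null) ? "index.html" : req.getPathInfo().substring(1);
        File page = new File(getServletContext().getInitParameter("dataDir"), name);
        if (!page.isFile()) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("text/html");
        InputStream in = new FileInputStream(page);
        OutputStream out = resp.getOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        in.close();
    }
}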

Similar Messages

  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
I am designing a solution for 1,700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
I am looking for best practice in defining storage disk size for user profile data. I am planning a single 30 TB disk to host the user profile data, with the single disk spread across two disk enclosures.
Please let me know if a single 30 TB disk can become a bottleneck for holding active user profile data.
I have SSD writable disks in storage with FC connectivity.
    Thanks
    Ravi

Check this TechEd session, the Windows Server 2012 VDI deployment guide (pages 8-9), and this article.
    General considerations during volume size planning:
Consider how long it will take if you ever have to run chkdsk. Chkdsk has seen significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
Consider how volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up/restore a 30 TB volume.
Any operation on a 30 TB volume, like a snapshot, will pose performance and additional disk space challenges.
For these reasons many IT pros choose to keep volume sizes under 2 TB. In your case, you could use 15x 2 TB volumes instead of a single 30 TB volume.
Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com
Powershell: Learn it before it's an emergency - http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx - http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Unicode Migration using National Characterset data types - Best Practice ?

I know that Oracle discourages the use of the national character set and national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know what the best practice is regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
Database details are as follows; the application is written in COBOL and is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, same for VARCHAR variables.
VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that unicode columns(NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters ?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway because syntax of DDL is different between SQL Server and Oracle. There is therefore little benefit of just keeping the data type names the same while so many things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types or at least I would use some placeholder syntax to replace placeholders with appropriate data types per target system in the application installer.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz

  • NLS data conversion – best practice

    Hello,
I have several tables originating from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
When I'm using DDL scripts to create the tables on the new database, and after that trying to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
As I understand, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) will become VARCHAR2(40) and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
I never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What will be the best practice solution for APEX?
    I'll appreciate any comments on the subjects,
    Arie.

    Hello,
    Thanks Maxim and Patrick for your replies.
I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partly because of what Patrick said about developers forgetting the full syntax, and partly because I read that some of the internal procedures (mainly those dealing with LOBs) do not support character-length semantics and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, " The big advantage is that you don't have to change any scripts or PL/SQL code."
I'm just curious, what technique does APEX use to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.

  • Data access best practice

The Oracle web site has an article about 9iAS best practices. Predefining column types in the select statement is one of the topics. The detail is the following.
    3.5.5 Defining Column Types
    Defining column types provides the following benefits:
    (1) Saves a roundtrip to the database server.
    (2) Defines the datatype for every column of the expected result set.
    (3) For VARCHAR, VARCHAR2, CHAR and CHAR2, specifies their maximum length.
    The following example illustrates the use of this feature. It assumes you have
    imported the oracle.jdbc.* and java.sql.* interfaces and classes.
// ds is a DataSource object
Connection conn = ds.getConnection();
PreparedStatement pstmt = conn.prepareStatement(
        "select empno, ename, hiredate from emp");
// Avoid a round trip to the database to describe the columns
((OraclePreparedStatement) pstmt).defineColumnType(1, Types.INTEGER);
// Column #2 is a VARCHAR, so we also specify its maximum length
((OraclePreparedStatement) pstmt).defineColumnType(2, Types.VARCHAR, 12);
((OraclePreparedStatement) pstmt).defineColumnType(3, Types.DATE);
ResultSet rset = pstmt.executeQuery();
while (rset.next()) {
    System.out.println(rset.getInt(1) + "," + rset.getString(2) + "," + rset.getDate(3));
}
rset.close();
pstmt.close();
conn.close();
Since I'm new to 9iAS, I'm not sure whether it's true that 9iAS really does an extra round trip to the database just for the data types of the columns and then another round trip to get the data. Can anyone confirm it? Besides, the above example uses Oracle proprietary APIs.
Is there any way to trace the db activities on the application server side without using an enterprise monitoring tool? WebLogic can dump all db activity to a log file so that it can be reviewed.
    thanks!

    Dear Srini,
Data-level security is not an issue for me at all. I have already implemented it and so far not a single bug has been caught in testing.
It's about object-level security, and that too for 6 different types of users demanding different reports, i.e. the columns and detailed drill-downs are different.
Again, these 6 types of users can be read-only users or power users (who can do ad hoc analysis), maybe BICONSUMER and BIAUTHOR.
So I need help regarding that, as we have to take a decision soon.
    thanks,
    Yogen

  • Data Migration Best Practice

Is there a clear-cut best practice procedure for conducting data migration from one company to a new one?

I don't think there is a clear cut answer for that. Best practice would always be relative. It varies dramatically depending on many factors. There is no magic bullet here.
One exception to the above: you should always use tab-delimited text format. It is the DTW-friendly format.
    Thanks,
    Gordon

  • XML data storage - How to handle symbol ' inside xml data from java code

    Hi
I'm trying to store XML data in Oracle XML DB 10gR2, both through SQL and Java.
    I have a simple test table called xmltable with a primary key and a XMLType column
    My sql is:
    insert into xmltable values ('020', sys.XMLType.CreateXML(
    '<?xml version="1.0"?>
    <Mpeg7 xmlns="urn:mpeg:mpeg7:schema:2001">
    <DescriptionMetadata id="2005">
    <LastUpdate>2006-10-19T12:48:22.null+01:00</LastUpdate>
    <Creator/>
    <CreationTime>2006-10-19T12:48:22.null+01:00</CreationTime>
    <Instrument>
    <Tool>
    <Name>LAS MPEG-7 Services v1.0</Name>
    </Tool>
    </Instrument>
    </DescriptionMetadata>
    <Description xsi:type="urn:ContentEntityType" xmlns:urn="urn:mpeg:mpeg7:schema:2001" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <MultimediaContent xsi:type="urn:ImageType">
    <Image id="2005">
    <MediaInformation>
    <MediaProfile>
    <MediaFormat>
    <Content/>
    <Medium href="urn:mpeg:MPEG7MediumCS:2.1.1"/>
    <FileFormat href="urn:mpeg:IPTCMimeTypeCS:image/jpeg"/>
    <FileSize>2782043</FileSize>
    <VisualCoding>
    <Format colorDomain="color" href="urn:mpeg:MPEG7VisualCodingFormatCS:1"/>
    <Frame height="11604" width="8676"/>
    </VisualCoding>
    </MediaFormat>
    </MediaProfile>
    </MediaInformation>
    <MediaLocator>
    <MediaUri>oracle:bosch.informatik.rwth-aachen.de:1521/SMILEY/AMSDB.MEDIA_IMAGE/IMAGE[IMG_ID=2005]</MediaUri>
    </MediaLocator>
    <TextAnnotation>
<FreeTextAnnotation>Bāmiyān and Shikari areas</FreeTextAnnotation>
    </TextAnnotation>
    <Semantic>
    <SemanticBaseRef href="oracle:bosch.informatik.rwth-aachen.de:1521/SMILEY/AMSDB.OBJECT_XML/MPEG7_SB#/mpeg7:Mpeg7/mpeg7:DescriptionMetadata[@id='65']"/>
    <SemanticBaseRef href="oracle:bosch.informatik.rwth-aachen.de:1521/SMILEY/AMSDB.OBJECT_XML/MPEG7_SB#/mpeg7:Mpeg7/mpeg7:DescriptionMetadata[@id='148']"/>
    </Semantic>
    </Image>
    </MultimediaContent>
    </Description>
</Mpeg7>'));
Unfortunately I'm facing a problem with the ' symbol in these tags:
    <SemanticBaseRef href="oracle:bosch.informatik.rwth-aachen.de:1521/SMILEY/AMSDB.OBJECT_XML/MPEG7_SB#/mpeg7:Mpeg7/mpeg7:DescriptionMetadata[@id='65']"/>
    <SemanticBaseRef href="oracle:bosch.informatik.rwth-aachen.de:1521/SMILEY/AMSDB.OBJECT_XML/MPEG7_SB#/mpeg7:Mpeg7/mpeg7:DescriptionMetadata[@id='148']"/>
If I try doubling the ' (not ''), it works with SQL through SQL*Plus, but not with Java. Indeed, I have to store the data through my application.
In Java I get java.sql.SQLException: ORA-19102.
    Many thanks in advance,
    Evanela
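One hedged sketch (not from the original posters) of how this quoting problem is commonly avoided from Java: bind the XML as a parameter instead of splicing it into the SQL text, so the embedded ' characters never need escaping. The connection details and file name below are placeholders; the table and the sys.XMLType.CreateXML call mirror the INSERT shown above. For documents larger than the VARCHAR2 bind limit, a CLOB bind would be needed instead.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class XmlInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:SMILEY", "amsdb", "secret");

        // Read the MPEG-7 document as-is; no quote escaping is required
        // because the value is bound, not concatenated into the statement.
        String xml = new String(Files.readAllBytes(Paths.get("mpeg7.xml")),
                StandardCharsets.UTF_8);

        String sql = "insert into xmltable values (?, sys.XMLType.CreateXML(?))";
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setString(1, "020");
        ps.setString(2, xml);   // Oracle builds the XMLType from the bound string
        ps.executeUpdate();
        ps.close();
        conn.close();
    }
}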

Thanks for your answer. I downloaded the JDBC AddressBook application and found it very useful. I was able to populate a datalist from a datasource and show it in my application. My problem still remains in the sense that I am unable to show the KPI data in a "custom node", meaning I would like not only to extract the database record directly into a datalist object but also to put the information of a single record in a textbox and a gauge and keep control of paging. I think this is done with the parser in the other examples, but I cannot figure out how to do it with MySQL and the new Desktop screen designer. Should I use a clip object? Or is that managed already by the desktop screen designer? Or where can I find an example with the functionality I need?

  • Using XML with Flex - Best Practice Question

Hi
I am using an XML file as a dataProvider for my Flex application.
My application is quite large and is being fed a lot of data, therefore the XML file that I am using is also quite large.
I have read some tutorials and looked through some online examples and am just after a little advice. My application is working, but I am not sure if I have gone about setting up and using my data provider in the best possible (most efficient) way.
I am basically after some advice as to whether the way I am using (accessing) my XML and populating my Flex application is the best / most efficient way.
My application consists of the main application (MXML) file and also additional AS files / components.
I am setting up my connection to my XML file within my main application file using HTTPService:
<mx:HTTPService
    id="myResults"
    url="http://localhost/myFlexDataProvider.xml"
    resultFormat="e4x"
    result="myResultHandler(event)" />
and handling my results with the following function:
public function myResultHandler(event:ResultEvent):void
{
    myDataFeed = event.result as XML;
}
Within my application I am setting my variable values by firstly declaring them:
public var fName:String;
public var lName:String;
public var postCode:String;
public var telNum:int;
and then giving them a value by "drilling" into the XML, e.g.:
fName = myDataFeed.employeeDetails.contactDetails.firstName;
lName = myDataFeed.employeeDetails.contactDetails.lastName;
postCode = myDataFeed.employeeDetails.contactDetails.address.postcode;
telNum = myDataFeed.employeeDetails.contactDetails.postcode;
etc.
Therefore, for any of my external components (components in a different AS file), I am referencing their values using Application:
import mx.core.Application;
and setting the values / variables within the AS components as follows:
public var fName:String;
public var lName:String;
fName = Application.application.myDataFeed.employeeDetails.contactDetails.firstName;
lName = Application.application.myDataFeed.employeeDetails.contactDetails.lastName;
As mentioned, this method seems to work; however, is it the best way to do it?
- Connect to my XML file
- Set up my application variables
- Give my variables values from the XML file
Bearing in mind that in this particular application there are many variables that need to be set, and therefore a lot of lines of code just setting up and assigning variable values from my XML file.
Could someone please advise me on this one?
Thanks a lot,
Jon.

I don't see any problem with that.
Your alternatives are to skip the instance variables and query the XML directly. If you use the values in a lot of places, then the variables will be easier to use and maintain.
Also, instead of instance variables, you could put the values in an "associative array" (object/hashtable), or in a dictionary.
Tracy

  • Data Mining Best Practices

Our organization is just beginning to use Data Mining in BI. We're trying to understand the typical protocol for moving data models into production. Is it standard practice to create the data models directly in the Production system, or are these changes typically transported into the Production system? I have been unable to find any information on this in my research and would appreciate any input to help guide our decisions.
    Thanks,
    Nicole Daley

    Hi There,
    You're on the right track, here are a few additional guidelines:
1. Determine your coverage levels along with the desired minimum data rate required by your application(s). Disabling lower data rates does have a significant impact on your coverage area.
    2. You have already prevented 802.11b clients by disabling  1,2,5.5,11 -- so that piece is taken care of.
    3. Typically, we see deployments having the lowest enabled data rate set to mandatory. This allows for the best client compatibility. You can also have higher mandatory rates, but then you need to confirm that all client devices will in fact support those higher rates. (Most clients do, but there are some exceptions). Worth noting here is that multicast traffic will be sent out at the highest mandatory data rate -- so if you have the need for higher bandwidth multicast traffic, you may want to have another data rate(s) set as mandatory.
    -Patrick Croak
    Wireless TAC

  • Multi-user, Multi-mac, single-storage best practices?

I wouldn't share the MacBook Pro, so my wife finally replaced her PC with a new iMac. We want to store our big music collection in one place (either the iMac or an external USB disk). Both machines presently use WiFi connectivity through our older round AirPort Extreme, though I'd consider upgrading if Airport Disk sharing would make this simple. We also presently use an AirPort Express to play music from the laptop to our home audio system and will continue to use the laptop for this. Presently we each have one library for laptop/iGadgets. Ideally we could share the library files across machines (in something akin to an NFS/Celerra mount in the Unix world) so that we don't have to add music more than once per person and I could recover laptop disk space. Is it possible to point multiple machines at the same library xml/itl files, or at least sync them somehow (maybe dotmac) to both machines, and how would one configure that? My knowledge of Mac networking is very small, but I'm tech-savvy in the Win/Unix world. Is the network latency prohibitively slow, particularly when pulling files through WiFi from a remote disk and playing back remotely to the AirPort Express? We don't want it to stop every 5 seconds to buffer. I welcome suggestions for the best way to proceed. Thanks in advance.

    dead thread

  • Help!  (Data Recovery best practices question)

Recently my fiancé's MacBook (the first white model that used an Intel chipset) running 10.6.2 began to behave strangely (slow response time, hanging while applications launch, etc). I decided to take an old external USB HD I had lying around and format it on my MBP in order to Time Machine her photos and iTunes library. Time Machine would not complete a backup and I could not get any of the folders to copy through Finder (various file corrupt errors). I assumed it could be a permission issue so I inadvertently fired up my 10.5 disk and did a permission repair. Afterwards the disk was even more flaky (which I believe was self-inflicted when I repaired with 10.5).
I've since created a 10.6.2 bootable flash key and went out and bought Disk Warrior (4.2). I ran a directory repair and several Disk Utility repairs but was still unable to get the machine to behave properly (and unable to get Time Machine to complete). Attempting to run permission repairs while booted to USB or the Snow Leopard install disk resulted in it hanging at the '1 minute remaining' mark for well over an hour. My next step was to re-install Snow Leopard, but the install keeps failing after the progress bar completes.
As it stands now the volume on the internal HD is not bootable and I'm running off my USB key boot drive using 'cp -R *' in Terminal to copy her user folder onto the external USB hard drive. It seems to be working, but it's painfully slow (somewhere along the lines of maybe 10 MB per half hour, with 30 GB to copy). I'm guessing this speed has to do with my boot volume running off a flash drive.
I'm thinking of running out and grabbing a FireWire cable and doing a target boot from my MBP, hoping that that would be a lot faster than what I'm experiencing now. My question is, would that be the wisest way to go? My plan of action was to grab her pictures and music, then erase and reformat the drive. Is it possible that I could try something else with Disk Warrior? I've heard a lot of good things about it, but I fear that I did a number on it when I accidentally ran the 10.5 permission repair on the volume.
Any additional help would be appreciated as she has years of pictures on there that I'd hate to see her lose.

    That sounds like a sensible solution, although you need not replace the original drive. Install OS X on the external drive, boot from it and copy her data. Then erase her drive and use Disk Utility's Restore option to clone the external drive to the internal drive. If that works then she should continue using the external drive as a backup so the next time this happens she can restore from the backup.
    For next time: Repairing permissions is not a troubleshooting tool. It's rarely of any use and it does not repair permissions in a Home folder. If a system is becoming unresponsive or just slower then there's other things you should do. See the following:
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle the best third-party utilities are: Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible. TechTool Pro provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation. TechTool Pro 4.5.1 or higher are Intel Mac compatible; Drive Genius is similar to TechTool Pro in terms of the various repair services provided. Versions 1.5.1 or later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts had been significantly reduced in Tiger and Leopard. These utilities have limited or no functionality with Snow Leopard and should not be installed.
    OS X automatically defrags files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems.
    I would also recommend downloading the shareware utility TinkerTool System that you can use for periodic maintenance such as removing old logfiles and archives, clearing caches, etc. Other utilities are also available such as Onyx, Leopard Cache Cleaner, CockTail, and Xupport, for example.
    For emergency repairs install the freeware utility Applejack (not compatible with Snow Leopard.) If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the commandline. Note that AppleJack 1.5 is required for Leopard. AppleJack is not compatible with Snow Leopard.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    9. SilverKeeper 2.0 (Freeware)
    10. MimMac (Commercial)
    11. Tri-Backup (Commercial)
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at www.versiontracker.com and www.macupdate.com.

  • Manage Portal data ?   best practice type thing

    Hello,
I am looking at how best to set up a Portal system in production, in particular a
good process to back up and re-create a Portal system.
I list some possibilities below. Does anyone know if this is how it is typically
done? Does this cover all the data which should be backed up / migrated?
    thanks!
    1- 'Entitlements' data. As far as I know, this is stored in the embedded LDAP.
    Can this be extracted ?
    2- DataSynch data.
    DataSynch web application.
    extract with ftp-like command
    upload as jar file
    3- Users and Groups.
Export to a dat file. (Suddenly I forget how to do this, though I think I saw it
somewhere.)

Okay, and then use an RFC call from the Web Dynpro application to fetch data from the SAP database?
    This answered my question:
    Best regards
    Øyvind Isaksen

  • Data Load Best Practice.

    Hi,
I need to know what the best way is to load the data from a source. Is SQL load the best way, or is using data files better? What are the inherent advantages and disadvantages of the two approaches?
    Thanks for any help.

I have faced a scenario that I'll explain here.
I had an ASO cube and data was being loaded from a txt file on a daily basis, and the data was huge. There were problems in the data file as well as the master file (the file used for dimension building).
The data and master files had special characters like ‘ , : ~ ` # $ % blank spaces and tab spaces; even the ETL process could not remove these things because they were coming within the data itself.
Sometimes a comment or a database error was also present in the data file.
I faced problems making a rule file with different delimiters; most of the time I would find the same character within the data that was being used as the delimiter, so it increased the number of data fields and Essbase gave an error.
So I used a SQL table for the data load: a launch table is created and the data is populated into this table. All errors are removed there before the data is loaded into Essbase.
This was my scenario (in this case I found SQL load, the second option, better).
    Thanks
    Dhanjit G.

  • Data Model best Practices for Large Data Models

We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory and then closes itself.
Obviously, this isn't acceptable to give to our end users like this, so I'm asking for suggestions.
    Are there settings I can change to get around this memory sucking issue? Do I need to use a smaller model?
    and in general, how are you all deploying this tool to your users? Our users are accustomed to a pre built data model so they can just click and add the fields they want and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    hope this helps anyone else who may bump into this issue.

  • Data Load -- Best Practices Analytics 9.0.1

We are currently implementing Essbase and I would be interested in feedback concerning data load practices.
We have a front end system which delivers live operational-type data in a SQL database. Currently, I use Access to run queries against the data to load into Enterprise, but I would like to move to an automated, daily load for Essbase. At this point in Essbase, I have several load rules that I apply to Excel files which were exported from Access (not a good solution). I would assume that a better answer would be a SQL load, but I wonder how others typically go about loading information. What about loading financial data consolidated in another system (Enterprise)?
Thanks for any feedback,
Chris

    Wanted to give an update of my progress today.
    I again began with a clean installation of 9.0.0.  Brought up the CF administrator and completed the installation.  From there, I went directly to installing the 9.0.1 update and the 9.0.1 hotfix.  To my amazement, the cf administrator came up with an issue. But . . .
    I then went into the administrator to install my 'customizations' (i.e. my datasources, my SMTP mail server, my custom tags, etc).  Truly nothing unusual.  Almost sad to say - vanilla.  I then shut down the service as recommended to have some of the changes 'take effect'.  Boom, the cf administrator no longer appears but gives me the blank screen and the same error messages I have listed in my first note.  So again, it must be "something either I turned on/off incorrectly, but don't even know where to look".
    Would this be considered a bug?
    Libby H
