XDD vs. offset file usage: which is the better approach?

Hi,
We are designing a new workspace (on Documaker 11.5) for a client that sends a flat-file input. To reduce the impact of offset-driven changes we plan to use one of the following:
1) XDD
2) OFFSET.DAT file, for which we would update AFGJOB.JDT with the following rules:
;LoadEXTOFFS;1;OFFSET.DAT;
;CUSAltSearchRec;1;;
Based on the copybook/layout of the incoming file we should be able to build either an XDD or an OFFSET.DAT file.
The reason I am asking is that we already have some experience with the offset file approach, and we wanted to explore whether the XDD is a better approach from a maintenance perspective.
Regards..

The XDD extract mapping method is standard, supported functionality.
The OFFSET.DAT file you mention was a pseudo-custom implementation originally submitted by CSC/PMSC for their customers. It was added to the product as a courtesy and was never really promoted as a mainstream feature. As such, it is not specifically targeted by QA testing or included in internal regression results; I'm not even sure it is documented. I doubt you would be able to get much help (or get it quickly) via Support if you had questions.
Therefore, the official recommendation would be to use the XDD, but if you have experience with the OFFSET.DAT method and are comfortable with the functionality, the feature should still be available.
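For what it's worth, the maintenance argument is the same for either approach: keep the offsets out of the rules themselves. Here is a rough illustration in plain Java (this is not Documaker code; the field names, offsets, and sample record are made up) of what an externalized offset table buys you:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: field positions live in one table (the kind of data an
// OFFSET.DAT or XDD holds), so a copybook layout change means editing one
// table instead of every rule that slices the record.
public class OffsetMapDemo {
    // field name -> { offset, length }; hypothetical example values
    static final Map<String, int[]> LAYOUT = new LinkedHashMap<>();
    static {
        LAYOUT.put("POLICY_NO", new int[]{0, 10});
        LAYOUT.put("INS_NAME",  new int[]{10, 20});
    }

    // Slice one named field out of a fixed-length record
    static String field(String record, String name) {
        int[] f = LAYOUT.get(name);
        return record.substring(f[0], f[0] + f[1]).trim();
    }

    public static void main(String[] args) {
        String rec = "POL1234567John Q Public       ";
        System.out.println(field(rec, "POLICY_NO")); // POL1234567
        System.out.println(field(rec, "INS_NAME"));  // John Q Public
    }
}
```

With either an XDD or an OFFSET.DAT, an offset-driven change becomes an edit to one mapping table rather than to every rule that reads the record.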

Similar Messages

  • Which is better approach to manage sharepoint online - PowerShell Script with CSOM or Console Application with CSOM?

Changes to SharePoint scripts do not require compilation, but is there anything else to consider?

Yes, PowerShell is great, since you can quickly change your code without compilation.
An SP admin can write PowerShell scripts without specific tools like Visual Studio.
With PowerShell you can also use cmdlets, which can remove a lot of code, for example when restarting a service.

  • Proxy or File which is better approach

    Hi
I am using PI 7.1. I need to pass some information (approx. 40K records per day) from the SAP CRM database to a third-party application. The target communication will be file-based. What is the best approach for the source communication? Is a proxy the better option, or should I write an extraction program in SAP CRM, generate a file, and then do the required transformations in PI?
    Regards,
    Nirupam

Hi,
First of all, since this is adapter-less communication, I think it is better to use proxies in as many scenarios as possible, not only for big messages. There is no limit on the number of proxy scenarios ;).
Secondly, you need an ABAPer for a custom ABAP program to extract the data and create the file, so in either case you need both ABAP and PI skills.
Monitoring will use the same tools; no extra tool is needed for a proxy.
And if you use a proxy, you don't have to worry even if your message size increases in the future.
If there is any change in the interface, you again need both ABAP and PI skills to make the changes.
Hence I feel that, whenever possible, it is better to use a proxy.
Shweta.

  • TextConverter.importToFlow() Vs TextFlowUtil.importFromString() which is better approach ?

Hi,
I am new to the Spark library. When I tried to use the TextFlowUtil class for handling HTML text, I found it has some issues with tags like <b>.
Can anyone suggest whether, when using HTML text with Spark components, it is a better approach to use TextConverter.importToFlow() instead of TextFlowUtil.importFromString()?
Thanks in advance.


  • Which is better? store files in the database or directly on the O.S.?

    Hi,
I'm developing an application to manage files. Which is better: storing files in the database or directly on the OS? If I decide to store them in the database I will use the BLOB data type, but I have a doubt: does a BLOB occupy the same space in the database regardless of file size? Is there a better data type?
    Tks,
    Fernando.

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1011065100346196442
    Ask Tom has a bit of info to share on the topic. Have a read of this. :)

  • Which is better ASM or file system storage

Hi all, I need urgent help from all you great DBAs.
I have to justify to my client which is the better option, ASM or file-system storage, and why.
Can anyone give me some write-up along these lines?

    Ok, how about this
    Today's large databases demand minimal scheduled downtime, and DBAs are often required to manage multiple databases with an increasing number of database files. Automatic Storage Management lets you be more productive by making some manual storage management tasks obsolete.
The Oracle Database provides a simplified management interface for storage resources. Automatic Storage Management eliminates the need for manual I/O performance tuning. It simplifies storage to a set of disk groups and provides redundancy options to enable a high level of protection. Automatic Storage Management facilitates non-intrusive storage allocations and provides automatic rebalancing. It spreads database files across all available storage to optimize performance and resource utilization. It also saves time by automating manual storage tasks, thereby increasing your ability to manage more and larger databases with increased efficiency. Different versions of the database can interoperate with different versions of Automatic Storage Management. That is, any combination of release 10.1.x.y and 10.2.x.y for either the Automatic Storage Management instance or the database instance interoperates transparently.
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/mgmt_db.htm

  • High Page Reads/Sec on Windows 2008 R2 64-bit running on VMware but very low Real Memory & Page file Usage.

    Hello All,
    Below is the server configuration,
    OS: Windows 2008 R2 Enterprise 64 Bit
    Version: 6.1.7601 Service Pack 1 Build 7601
    CPU: 4 (@ 2.93 GHz, 1 core)
    Memory: 12 GB
    Page file: 12 GB
1. For any sampling interval (15 minutes, hourly, weekly, etc.), real memory utilization has never crossed 20% and page file usage sits at 0.1%. Yet the Pages/Sec > Limit% counter reports 100% continuously regardless of the sampling interval. Upon further observation, Page Reads/Sec is somewhere between 150 and 450 and Pages Input/Sec is somewhere between 800 and 8000. Does this indicate a performance bottleneck? (In the interim I've asked the users and application owners whether they notice any performance degradation and am awaiting a response.) If this does indicate a performance issue, could someone help me track down which process or memory-mapped file is causing it, and what I should do to fix it?
p.s. Initially the Security logs on this server were full; since the page file is tied to the Application, Security and System logs, these were freed up to see if that was causing the high page reads, but it wasn't.
2. If the above does not necessarily indicate a performance problem, can someone reference a few KB articles that confirm this? Also, will there be any adverse effects from attempting to fine-tune a server that is already running fine, assuming the application owners confirm there isn't any performance degradation?
    Thanks in advance.

    Hi,
Based on the description, you can download Server Performance Advisor (SPA) to help further analyze the performance of the server. SPA generates comprehensive diagnostic reports and charts and provides recommendations to help you quickly analyze issues and develop corrective actions.
    Regarding this tool, the following articles can be referred to for more information.
    Microsoft Server Performance Advisor
    https://msdn.microsoft.com/en-us/library/windows/hardware/dn481522.aspx
    Server Performance Advisor (SPA) 3.0
    http://blogs.technet.com/b/windowsserver/archive/2013/03/11/server-performance-advisor-spa-3-0.aspx
    Best regards,
    Frank Shen

  • Which is better for performance Azure SQL Database or SQL Server in Azure VM?

    Hi,
    We are building an ASP.NET app that will be running on Microsoft Cloud which I think is the new name for Windows Azure. We're expecting this app to have many simultaneous users and want to make sure that we provide excellent performance to end users.
    Here are our main concerns/desires:
Performance is paramount; fast response times are very important.
    We want to have as little to do with platform maintenance as possible e.g. managing OS or SQL Server updates, etc.
    We are trying to use "out-of-the-box" standard features.
    With that said, which option would give us the best possible database performance: a SQL Server instance running in a VM on Azure or SQL Server Database as a fully managed service?
    Thanks, Sam

Hello,
SQL Database uses shared resources in the Microsoft data centre; Microsoft balances the resource usage of SQL Database so that no one application continuously dominates any resource. You can try the Premium Preview for Windows Azure SQL Database, which offers better performance by guaranteeing a fixed amount of dedicated resources for a database.
If you use a SQL Server instance running in a VM, you control the operating system and database configuration, and the performance of the database depends on many factors such as the size of the virtual machine and the configuration of the data disks.
    Reference:
    Choosing between SQL Server in Windows Azure VM & Windows Azure SQL Database
    Regards,
    Fanny Liu
TechNet Community Support

  • Which is better - Add a dimension or create more members in an existing

    Which is better - Add a dimension or create more members in an existing dimension?
    We are trying to figure out which can give us better performance in terms of calculations and retrieving reports - to add another dimension (entity/country) or add about 500-800 more members in an existing location/division dimension?
    Thank you!

If you have a BSO cube, I would recommend adding to the existing dimension, whereas with ASO you can add the members in a new dimension. Adding a new dimension is like creating a new cube: you have to change every calc script, report script, FR report, web form and rule file; all the dependencies have to be changed manually. I think 500 members in the existing BSO dimension will not impact calc or retrieval times that much.

  • Which is Better? Mac or PC?

    Which is Better?
    Intel iMac   Mac OS X (10.4.5)  

    Hello,
    I'm not going to argue that a Mac is better, but I do have a response to:
    looking computers and the software is great however
    they have major compatibility issues. for example
    many websites do not support mac's Ex.
    www.aircanada.com
I have never had any problem exchanging documents or viewing web pages with the Mac.
All my document exchanges have worked flawlessly, both to and from Windows PCs.
    But, you need to make sure you are using the right program on the Mac if you want to share files with PC users.
    As for the website compatibility, the site you provided as an example will work just fine. They just don't want you to know it.
    What that website is doing, is running a check to see what web-browser you are using. Beyond that, they are also running a check to see which computer you are using.
    So, what you need to do is make sure that your browser tells their website what it wants to hear.
    So, in Safari, you can use the "Debug" menu to have Safari report itself as the Windows version of Internet Explorer.
    Go to the Debug menu, then pick "User Agent", then choose: "Windows MSIE 6.0".
    If the site still won't load, then go to the site first, and perform the selection again from the Debug menu.
    Basically, what you are doing is changing what Safari identifies itself as.
Additionally, the website you listed states that it is compatible with Mac OS 9.0 or later, so it should work provided you pass its checks.
    If for some reason it still won't work, contact them since they say that it is compatible with the Mac.
    The only sites that absolutely will not work with a Mac are sites that use "Active X" to take over control of your computer or communicate directly with the operating system. Fortunately, those sites are getting rarer and rarer.
    The only site I've run across in recent times that uses Active X is Microsoft's Windows Update site (which you wouldn't need with a Mac anyway).
    As for enabling the "Debug" menu in Safari, if it is not already there, you can do that by:
    Closing Safari
    Open "Terminal" which is located at:
    Hard Drive --> Applications --> Utilities --> Terminal
    Then type the following at the command line:
    defaults write com.apple.Safari IncludeDebugMenu 1
    If you would prefer an automated method of enabling the Debug menu, you can always download the free Safari Enhancer program which includes a setting for this feature:
    http://www.lordofthecows.com/softwarelist.php

  • Swap file usage

    Hi
    We have recently updated a MII Server from Netweaver CE 7.1 SP4 with MII 12.1.5 (Build 86) to Netweaver CE 7.1 SP5 with MII 12.1.9 (Build 109). The MII server is running on a Windows Server 2003 64-bit with 8GB of RAM.
After the update, swap file usage has increased from an average of 5-10% to now close to 80%. The only things changed are the Netweaver and MII versions. Please advise how to proceed.
    Best Regards
    /Simon Bruun

    The first thing that happens on a new system has nothing to do with the swap file. It's the creation of the hibernation image file that is roughly equal to the amount of RAM in your computer. I'd guess yours is around 8 GBs.
    It's a terrible idea to try and run the computer from a flash drive. That may be useful for creating an installer disk or an emergency recovery disk that you use once in a while.
    Your only temporary solution is to change the hibernation mode and erase the hibernation sleep image file.
    Open the Terminal and paste the following at the prompt:
         sudo pmset -a hibernatemode 0
    Press RETURN.
         rm -rf /var/vm/*
    Press RETURN.
    After the first command you will be prompted to enter your admin password which is not echoed to the screen - type carefully.

  • LDOM on ZFS - ZVOL or mkfile ... which is better ?

    Curious which approach is better for using LDOMs on ZFS ... Using a ZVOL specific to each LDOM or a mkfile in a ZFS file system specific to each LDOM ?
    Both can be snapped but curious which is the better approach and why.

    Hi all,
I have searched everywhere, and a lot of what I have seen has helped me move forward piece by piece.
I have a test LDom that I created by replicating all the commands from the LDoms Beginners Guide - http://www.sun.com/blueprints/0207/820-0832.pdf -
with the exception that I am using zfs instead of ufs.
    I have setup the Jumpstart as follows:
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    ldpool 15.4G 90.9G 5.38G /ld
    ldpool/test 18K 90.9G 18K /ld/test
    # more net172_sun4v
    install_type initial_install
    system_type server
    client_arch sun4v
    pool test auto auto auto mirror C0d0s0
    When I do a: boot vnet1 - install, I get the following:
    {0} ok boot vnet1 - install
    Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args:
    - install
    Requesting Internet Address for 0:14:4f:f8:e4:a0
    ++++++++++++
    It is just not going anywhere from there...
    Netstat on the jumpstart server shows a connection...
    Local Address Remote Address Swind Send-Q Rwind Recv-Q State
    svrk-unixadm01.nfsd test01.printer 50680 2399 49232 0 ESTABLISHED
    Active UNIX domain sockets
    Address Type Vnode Conn Local Addr Remote Addr
    600191b7ac0 stream-ord 60019977340 00000000 /var/run/zones/svruxtst-ecxcefe01.co
    nsole_sock
    600191b7c88 stream-ord 60019116d00 00000000 /var/run/.inetd.uds
    But this is just going on for ever....

  • Coding Preference ..Which is better for memory?

    Hey all,
Java's garbage collection is sweet. However, I read somewhere that setting objects to null after I'm done with them will actually help.
(Help what, I'm not sure; my guess is memory used by the JVM.)
Thus I have two ways to do the same thing, and I'd like to hear people's comments on which is "better", or will yield faster performance.
Task: I have a Vector of Strings (called paths) that holds absolute file paths. (Don't ask why I didn't use a String[].) I'd like to check whether they exist and, if not, create them; I'll use the createNewFile() method for that.
Method A -- Here I'll reuse the File object:

public void myMethod() throws Exception {
    File file = null;
    for (int i = 0; i < paths.size(); i++) {
        file = new File(paths.get(i).toString());
        boolean made = file.createNewFile();
        if (made) { doSomething(); }
        file = null;
    }
}

Method B -- Here I'll use, um, "dynamically made" ones that won't eventually be set back to null:

public void myMethod() throws Exception {
    for (int i = 0; i < paths.size(); i++) {
        boolean made = (new File(paths.get(i).toString())).createNewFile();
        if (made) { doSomething(); }
    }
}

So when the code eventually exits myMethod, the object "file" will be out of scope and trashed... correct? If that's the case, would there be any other differences between the two implementations?
    Thanks

There's no real difference between the two. Choose the style you prefer, although in the first one I'd lose the "file = null" statement, since that variable is about to disappear, and I'd move the definition into the loop. Always give variables as small a scope as possible, mainly to keep the logic simple:

public void myMethod() throws Exception {
    for (int i = 0; i < paths.size(); i++) {
        File file = new File(paths.get(i).toString());
        boolean made = file.createNewFile();
        if (made) { doSomething(); }
    }
}
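For anyone curious, the two styles really are interchangeable. Here is a self-contained sketch (my own, not from the thread; it assumes Java 9+ for List.of and uses a temp directory so it can actually run) with the declaration moved into the loop as suggested:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.List;

// Demonstration that declaring the File inside the loop creates the same
// files; the scope of the variable has no effect on what gets created or
// on memory behavior once the method returns.
public class CreateFilesDemo {
    public static void createAll(List<String> paths) throws IOException {
        for (String p : paths) {
            File file = new File(p);            // smallest possible scope
            boolean made = file.createNewFile(); // false if it already existed
            // doSomething() would go here when made == true
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("demo").toFile();
        createAll(List.of(new File(dir, "a.txt").getPath(),
                          new File(dir, "b.txt").getPath()));
        System.out.println(new File(dir, "a.txt").exists()); // true
    }
}
```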

  • Which is better? Extracting images from directories or from database?

    Good day,
I would like to start a discussion on extracting images (binary data) from a relational database. Although some might say that extracting images from directories is a better approach, I am still sceptical about that implementation.
My argument is based on the reasoning below:
1. Easier maintenance - the system administrator can do backups from one place, which is the database.
2. High level of security - can anyone tell me how easy it is to hack into a database server?
3. The image is not dependent on the file structure - no more worries about broken links because someone mistakenly changed the directory structure. If a change is needed, it will be handled efficiently by the database server.
The intention of my question is to find out:
1. Why is taking images from a directory folder that resides on the web server better than taking the same images from the database?
2. How is the directory approach scalable if there are thousands of images and text files to be served?
    If anybody would be kind enough to reply, I would be most grateful.
    Thank You.
    Regards
    hatta

    Databases are typically more oriented towards text and number content than binary content, I believe. If you carry images in the database you will need to run them through your code and through your java server before they are displayed. If they are held in a directory they will be called from hrefs in the produced page, which means that they are served by your static server. This is quicker because no processing of the image is required. It also means the Database has to handle massively less data. Depending on the database this should be far quicker to query.
    It is worth noting that it is also quite difficult to actually change mime-types on a page to display a picture in the midst of HTML- the number of enquiries on these pages about this topic should be enough to illustrate this.
    If you give over controls of all the image file handling to your java system (which I do when I write sites like the one you describe) then the actual program knows where to put the images and automatically adds them to the database. The system administrator never needs to touch them. If they want a backup they save the database and the website directory. The second of those should be a standard administrative task anyway, so there is not a huge difference there. The danger of someone accidentally changing the directory structure is no greater than the danger of someone accidentally dropping a database table- it can be minimised by making sure your administrators are competent. Directory structures can be changed back, dropped tables are gone.
    The security claim is slightly negated because you still have to run a webserver. Every program you run on your server is vulnerable to attack but if you are serving web pages you will have a server program that is faster than a database for image handling. You are far more at risk from running FTP or Telnet on your server or (worst of all) trying to maintain IIS.
The images-in-directory structure is more scalable because very large databases are more likely to become unstable, and carrying a 50k image in every image field rather than 2 bytes of text will make the database roughly 25000 times larger. I have already mentioned the difference in serving methods, which stands in favour of recycling images. A static site will be faster than a dynamic site of equivalent size, so take advantage of that where you can.
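To make the serving-path difference concrete, here is a stripped-down sketch (illustrative only; the servlet and JDBC plumbing is omitted, and all names are my own) of the filesystem route versus the database route:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Filesystem route vs. database route for serving an image.
public class ImageServing {
    // Filesystem route: bytes go straight from disk to the response stream.
    static void serveFromDisk(Path imageDir, String name, OutputStream out)
            throws IOException {
        Path img = imageDir.resolve(name).normalize();
        if (!img.startsWith(imageDir)) {   // guard against ../ path traversal
            throw new IOException("invalid name");
        }
        Files.copy(img, out);
    }

    // Database route (pseudocode in comments): every request costs a query
    // plus a pass through the JDBC driver before the same bytes reach out:
    //   ResultSet rs = stmt.executeQuery("SELECT data FROM images WHERE id = ?");
    //   if (rs.next()) {
    //       try (InputStream in = rs.getBinaryStream("data")) {
    //           in.transferTo(out);
    //       }
    //   }
}
```

The extra hop through the query engine and driver is the processing overhead the reply above refers to; a static file server skips it entirely.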

  • Which is better, Double Buffering 1, or Double Buffering 2??

    Hi,
    I came across a book that uses a completely different approach to double buffering. I use this method:
private Graphics dbg;
private Image dbImage;

public void update(Graphics g) {
    if (dbImage == null) {
        dbImage = createImage(this.getSize().width, this.getSize().height);
        dbg = dbImage.getGraphics();
    }
    dbg.setColor(this.getBackground());
    dbg.fillRect(0, 0, this.getSize().width, this.getSize().height);
    dbg.setColor(this.getForeground());
    paint(dbg);
    g.drawImage(dbImage, 0, 0, this);
}

That was my method for double buffering, and this is the book's method:
import java.awt.*;

public class DB extends Canvas {
    private Image[] backing = new Image[2];
    private int imageToDraw = 0;
    private int imageNotDraw = 1;

    public void update(Graphics g) {
        paint(g);
    }

    public synchronized void paint(Graphics g) {
        g.drawImage(backing[imageToDraw], 0, 0, this);
    }

    public void addNotify() {
        super.addNotify();
        backing[0] = createImage(400, 400);
        backing[1] = createImage(400, 400);
        setSize(400, 400);
        new Thread(
            new Runnable() {
                private int direction = 1;
                private int position = 0;
                public void run() {
                    while (true) {
                        try {
                            Thread.sleep(10);
                        } catch (InterruptedException ex) {
                        }
                        Graphics g = backing[imageNotDraw].getGraphics();
                        g.clearRect(0, 0, 400, 400);
                        g.setColor(Color.black);
                        g.drawOval(position, 200 - position, 400 - (2 * position), 72 * position);
                        synchronized (DB.this) {
                            int temp = imageNotDraw;
                            imageNotDraw = imageToDraw;
                            imageToDraw = temp;
                        }
                        position += direction;
                        if (position > 199) {
                            direction = -1;
                        } else if (position < 1) {
                            direction = 1;
                        }
                        repaint();
                    }
                }
            }
        ).start();
    }

    public static void main(String args[]) {
        Frame f = new Frame("Double Buffering");
        f.add(new DB(), BorderLayout.CENTER);
        f.pack();
        f.show();
    }
}

Which is better? I noticed smoother animation with the latter method.
    Is there no difference? Or is it just a figment of my imagination??

To be fair, if you download an applet all the class files are stored in your .jpi_cache and, depending on how the game requests its graphics, sometimes those are stored there too. So really, if you have to download an applet game twice, blame the programmer (I've probably got that dead wrong :B ).
But what's wrong with Jars? They offer so much more.
No offence meant by this, Malohkan, but if you can't organize your downloaded files the internet must really be a minefield for you :)
Personally I'd be happy if I never saw another applet again; it seems Java is tied to this legacy, and to the average computer user it seems that is all Java is capable of.
Admittedly there are some very funky applets out there using lots of way-over-my-head funky pixel tricks, but they would look so much better running full screen and offline.
