Saving zip code data with PHP - best practices

I have built my client an application that analyzes uploaded zip codes for matches with a standard set of zips. These uploaded zips can be entered one at a time, or pasted in from an XLS file (just 5-digit ZIPs).

They are now asking me to save these uploaded zips, and I am wondering what would be the best way to do that. My two obvious choices are:

1. Write them to an external text file with a programmatically generated name, and enter the name in the database, keyed to the user.
2. Write the zips themselves into a BLOB field in the database.

I'm inclined to the former, since I don't think there would ever need to be any further manipulation of these zip codes, but what do you think? Are there other choices I may have overlooked?
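One choice possibly overlooked - a minimal sketch, not from the original thread: store each ZIP as its own row keyed to the user, so the data stays queryable later. The table name, column names and PDO connection below are assumptions for illustration only.

[code]
<?php
// Hypothetical table: CREATE TABLE zip_uploads (user_id INT, zip CHAR(5));
// $pdo is an existing PDO connection; $userId and $rawInput come from the app.
function save_zips(PDO $pdo, int $userId, string $rawInput): int
{
    // Pull every 5-digit ZIP out of the pasted text (one per line or mixed).
    preg_match_all('/\b\d{5}\b/', $rawInput, $matches);
    $zips = array_unique($matches[0]);

    $stmt = $pdo->prepare('INSERT INTO zip_uploads (user_id, zip) VALUES (?, ?)');
    foreach ($zips as $zip) {
        $stmt->execute([$userId, $zip]);
    }
    return count($zips);   // number of distinct ZIPs saved
}
?>
[/code]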
Murray --- ICQ 71997575
Adobe Community Expert
(If you *MUST* email me, don't LAUGH when you do so!)
==================
http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
http://www.dwfaq.com - DW FAQs, Tutorials & Resources
==================

Dang - sorry. Wrong forum.
"Murray *ACE*" <[email protected]> wrote
in message
news:fvfi5j$ig7$[email protected]..
>I have built my client an application that analyzes
uploaded zip codes for
>matches with a standard set of zips. These uploaded zips
can be one at a
>time, or a copy/paste from an XLS file (just 5 digit
ZIPs).
>
> They are now asking me to save these uploaded zips, and
I am wondering
> what would be the best way to do that. My two obvious
choices are -
>
> 1. Write them to an external text file with a
programmatically generated
> name, and enter the name in the database, keyed to the
user.
> 2. Write the zips themselves into a glob field in the
database.
>
> I'm inclined to the former, since I don't think there
would ever need to
> be any further manipulation of these zip codes, but what
do you think?
> Are there other choices I may have overlooked?
>
> --
> Murray --- ICQ 71997575
> Adobe Community Expert
> (If you *MUST* email me, don't LAUGH when you do so!)
> ==================
>
http://www.projectseven.com/go
- DW FAQs, Tutorials & Resources
>
http://www.dwfaq.com - DW FAQs,
Tutorials & Resources
> ==================
>
>

Similar Messages

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers. This growth has turned a workload that used to be manageable with just two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
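    Rendered as well-formed XML, that proposed outline would look something like the following (element names are taken from the outline above; the exact nesting is an assumption):
    <CENTERS>
         <CENTER>
              <Center_name>...</Center_name>
              <Center_location>...</Center_location>
              <CATEGORIES>
                   <CATEGORY>
                        <Category_Type>...</Category_Type>
                        <BRANDS>
                             <BRAND>
                                  <Brand_name>...</Brand_name>
                             </BRAND>
                        </BRANDS>
                   </CATEGORY>
              </CATEGORIES>
         </CENTER>
    </CENTERS>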
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though, as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages. Basically what you do is create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Code Set pattern or best practice?

    Hi all,
    I have what I would have thought to be a common problem: the best way to model and implement an organization's code sets. I've Googled, and I've forumed - without success.
    The problem domain is this: I'm redeveloping an existing application, which currently represents its vast array of code sets using a separate table for each set. There are currently 180+ of these tables. Not a very elegant approach at present. The majority of these code sets are what I would class as "simple" - a numeric value associated with a textual description - e.g. 1 = male, 2 = female, or 1 "drinks excessively", 2 "drinks sometimes", etc. Most of these will just be used to associate a value with a combo box selected value.
    There are also what I would class as "complex" code sets, which may have 1..n attributes (ie not just a numeric and text value pair). An example of this (not overly complex) is zip code, which has a unique identifier, the zip code itself (which may change - hence the id), a locality description, and a state value.
    Is there a "best practice" approach or pattern which outlines the most efficient way of implementing such code sets? I need to consider performance vs the ability to update the code set values, as some of them may change from time to time without notice at the discretion of government departments.
    I had considered hard coding, creating classes to represent each one, holding them in xml files, storing in the database etc, but it would seem that making the structure generic enough to cater to varying numbers of attributes and their associated datatypes will be at the cost of performance.
    Any suggestions would be greatly appreciated.
    Thanks.
    Paul C.

    Hi Saish,
    Thanks for your response. Yes, this approach is what I had considered - I'll be using Hibernate so these values will be cached etc.
    I guess my main concern is reducing the huge number of very small tables in use. I was thinking about this some more, and for the simple tables was thinking of 2 tables: 1 (eg "CODE_SET") to describe the code set (or ref table etc) in question, the second to hold the values. This way 80 odd tables would be reduced to 2. Not sure what's best here - simpler ER diagram or more performance!
    Tables...
    Enumeration
    - EnumerationId
    - EnumerationName
    - EnumerationAbbreviation
    EnumerationValues
    - EnumerationId
    - ValueIndex
    - ValueName
    - ValueAbbreviation
    The above allows the names to change.
    You can add a delete flag if values might be deleted but old records need to be maintained.
    Convention: In the above I specifically name the second table with a plural because it holds a collection of sets (plural) rather than a single set.
    In the first table the id is the key. In the second the id and the index are the key. The ids are unique (of course). The enumeration name should be unique in the first table. In the second table the EnumerationId and value name should be unique.
    Conversely you might choose to base uniqueness on the abbreviation rather than the name.
    The Name vs Abbreviation are used for reporting/display purposes (long name versus short name).
    It is likely that for display/report purposes you will have to deal with each of the sets uniquely rather than a group. Ideally (strongly urged) you should create something that autogenerates a java enumeration (specific with 1.5 or general with 1.4) that uses the id values and perhaps the indexes as the values and the names are generated from the abbreviations. This should also generate the database load table for the values. Obviously going forward care must be taken in how this is modified.
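    For the simple sets, pulling one code set's values out of such a generic pair of tables is straightforward. A minimal sketch (shown as PHP/PDO purely for illustration, not the thread's Java/Hibernate stack; the table and column names follow the outline above, everything else is assumed):
    [code]
    <?php
    // Load one "code set" (e.g. for a combo box) from the generic pair of tables.
    // Hypothetical schema per the outline: Enumeration / EnumerationValues.
    function load_code_set(PDO $pdo, string $setName): array
    {
        $sql = 'SELECT v.ValueIndex, v.ValueName
                  FROM Enumeration e
                  JOIN EnumerationValues v ON v.EnumerationId = e.EnumerationId
                 WHERE e.EnumerationName = ?
                 ORDER BY v.ValueIndex';
        $stmt = $pdo->prepare($sql);
        $stmt->execute([$setName]);
        // Returns e.g. [1 => 'male', 2 => 'female'] for a "gender" set.
        return $stmt->fetchAll(PDO::FETCH_KEY_PAIR);
    }
    ?>
    [/code]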

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin

  • EDI - 753 Routing Request and 754 Routing Instructions transactions - what SAP ECC6 EDI output type, msg type, msg code, basic IDoc is best practice to generate the IDoc?

    One of our Trading Partners wishes to implement the 753 Routing Request and 754 Routing Instructions. Can anyone give me best-practice answers to the following questions? We are on SAP ECC6 R3.
    What Application? i.e. V2, V7?
    What output condition to use? i.e. LALE, LAT2, SEDI?
    What Message type?  i.e SHPADV?
    What Basic Idoc Type? i.e SHPMNT05?
    What Message Code?  i.e. SHPM?
    What Process Code?  i.e. SHPM??
    What Function Module? i.e. IDOC_OUTPUT_SHPMNT??
    Does the SAP Transportation Module have to be configured for this to be implemented?
    Your comments are greatly appreciated; we have until Nov 1 to be compliant.

    Good Morning,
    While I did not get any responses to my question, yes, I was able to determine what needed to be configured. I hope the below will help you (this is for SAP ECC6):
    The output application for the 753 Routing Request is "V7"
    The output type is "SEDI"
    Transmission medium is "6"
    The Basic IDoc type is "SHPMNT05"
    When setting up your Partner Profile (WE20), add: Message Partner Role "SP" with Message type "SHPADV", Message code "753", Basic Type "SHPMNT05"; under Message Control add Application "V7", Message type "SEDI", Process Code "SHPM".
    also http://scn.sap.com/thread/698368 pages 14 and 15 were helpful.
    I have not configured the Inbound 754.  For now the inbound 754 Routing Instructions will be emailed to our traffic department.
    Thank you,
    Have a great day,
    jane

  • Dealing with Drobo (best practices?)

    I have two second generation Data Robotics Drobos, and have been using them under 10.6 on a MacBook via USB. Like many Drobo users, I have had various "issues" over the years, and even suffered 1TB of data loss probably related to the USB eject bug that was in Mac OS X 10.6.5-10.6.7. I have also used the Drobos on a Mac with FireWire.
    My Drobos are set up as 1TB volumes, so my 4x2TB unit shows six 1TB volumes. Using DiskWarrior on some of my volumes has reported "speed reduced by disk malfunction" and DW was unable to rebuild the directory. I fear for my data, so I have been in the process of moving data away from the drive and starting fresh.
    I would like to use this discussion to see what "best practices' others have come up with when dealing with a Drobo on a Mac.
    When I first set up the Drobo, the documentation stated that the unit would take longer to startup if using one big partition, so I chose the smallest value -- 1TB. This initially gave me a few Drobo volumes to use, and as I swapped in larger hard drives, Drobo would start adding more 1TB volumes. I like this approach, since it lets me unmount volumes I am not using (so iMovie does not have to find every single "iMovie Events" I have across 12TB of drives).
    This was also a good way to protect my data. When my directory structure crashed, and was unrepairable, I only lost 1TB of data. Had that happened on a "big" volume Drobo, I would have lost everything.
    Data Robotics' own KB articles will tell you to never use Disk Utility to partition a Drobo, but other KB articles say this is what you must do to use Time Machine... Er? And, under 10.7, they now say don't do that, even for Time Machine. Apparently, if you partitioned under 10.6 or earlier, you can still use your Time Machine backup under 10.7, but if you are 10.7-only, you have to use some Time Tamer utility and create a sparsebundle image -- and then you cannot browse Time Machine backups (what good is that, then?).
    It's a mess.
    So I am looking for guidance, tips, suggestions, and encouragement. I will soon be resetting one of my Drobos and starting fresh, then after I get everything working again, I will move all my data over to it, and reset my second Drobo.

    I have been trying to do either.
    Right now I have the images download when the cell is created and then stored into an NSMutableArray. The array is initially populated with an NSString value of the URL to the image. I then test to see if the object at the current TableView index is a UIImage; if not, I download the image and replace the existing NSString with the UIImage in the array.
    - (UIImage *)newUIImageWithURLString:(NSInteger)index
    {
        // imgarr initially holds NSString URLs; a downloaded image replaces the string.
        if (![[imgarr objectAtIndex:index] isKindOfClass:[UIImage class]]) {
            NSLog(@"image not there");
            UIImage *img2get = [[UIImage alloc] initWithData:
                [NSData dataWithContentsOfURL:
                    [NSURL URLWithString:[imgarr objectAtIndex:index]]]];
            [imgarr replaceObjectAtIndex:index withObject:img2get];
            [img2get release];
        }
        return [imgarr objectAtIndex:index];
    }
    Works fairly well, but it does stall the scrolling when I download the image, because I am calling it like this in cellForRowAtIndexPath:
    UIImage *cellimage = [self newUIImageWithURLString:indexPath.row];
    cell.image = cellimage;
    I am looking into using a background process for the actual downloading so as not to interfere with the table operations. Have you any thoughts on the best way to do this?

  • Running bgp with provider, best practices

    Hi all
    We have recently got a link from a provider to give us point-to-point connectivity between 2 offices; the provider is running BGP to us.
    What best practices should I follow when configuring this? At the moment we have connectivity, with basic neighbour statements etc.
    What things should I do for security and to protect my environment from the provider etc.?
    Cheers
    Carl

    Hi,
    This is a very valid concern for a provider and customer, as CE-PE connectivity is the connection between two different entities. When we talk about the CE-PE connection, here is what we can prevent:
    1. Securing the BGP neighborship by enabling a password
    2. Preventing excessive route flooding
    3. Securing the data over an MPLS VPN network
    For details on these, refer to the document below:
    http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/L3VPNCon.html#wp309784
    Hope it answers your query.
    Thanks & Regards
    Sandeep

  • Code of conduct or best practices?

    Hello, I'd like to know if there is any document of best practices or a code of conduct to work with NWDI made by SAP or any other expert (blogs, articles....).
    I would really appreciate your help. Thanks in advance.

    Refer to 'Best Practices and How-To-Guides for NWDI-based Development' section in
    <<NWDI Resources>> [original link is broken]
    Following blog might be interesting for you:
    <</people/guenter.schiele/blog/2005/12/21/best-practices-for-running-the-nwdi
    Regards,
    Bhagya

  • HR Master Data conversion-SAP Best Practices

    Hello there,
    We would like to use the SAP Best Practices for HR Master Data conversion.
    Now we want to leverage the SAP Best Practices to convert the master data. Could anyone explain in detail how to do the same?
    How do we install the Best Practices only to the extent of the data conversion? We don't want to use the rest of the Best Practices.
    I know there are some notes out there.
    Any help on the above is highly appreciated.

    Hi,
    I am not very sure if you can install only the required component, but there would be some prerequisites for every installation.
    It will be clearly mentioned in the baseline.
    Also check if it's available for the country in which you're currently working...
    Use the eCATT Test Configuration & Test Scripts.
    Please revert in case you need any further details.

  • Securing with NAT - Best Practice ?

    Hi,
    Is it forbidden to do NAT exemption from Internal to DMZ?
    I hear there is a compliance requirement in banking where 2 servers need to communicate but are forbidden to know each other's IP address?
    How about NAT as a second layer of firewall?
    What is the best practice to secure an enterprise network from a NAT point of view?
    Thx

    Hello Ibrahim,
    No, not at all, that is not a restriction at all. You can do it if needed.
    Now it looks like in your environment there is a requirement that these 2 servers communicate with each other but do not know each other's IP address.
    Then NAT is your friend, as it will satisfy the requirement you are looking for.
    Well, I do not consider NAT to be a security measure, as for me it does not perform any inspection, rule set, policy, etc., but I can assure you there are a lot of people who think of it as a security measure.
    I see it as an IP service that allows us to preserve the IP address space.
    For more information about Core and Security Networking follow my website at http://laguiadelnetworking.com
    Any question contact me at [email protected]
    Cheers,
    Julio Carvajal Segura

  • Using XML with Flex - Best Practice Question

    Hi
    I am using an XML file as a dataProvider for my Flex
    application.
    My application is quite large and is being fed a lot of data
    – therefore the XML file that I am using is also quite large.
    I have read some tutorials and looked through some online
    examples and am just after a little advice. My application is
    working, but I am not sure if I have gone about setting and using
    my data provider in the best possible (most efficient) way.
    I am basically after some advice as to whether the way I am going
    about using (accessing) my XML and populating my Flex application
    is the best / most efficient way?
    My application consists of the main application (MXML) file
    and also additional AS files / components.
    I am setting up my connection to my XML file within my main
    application file using HTTPService :
    <mx:HTTPService
        id="myResults"
        url="http://localhost/myFlexDataProvider.xml"
        resultFormat="e4x"
        result="myResultHandler(event)" />
    and handling my results with the following function:
    public function myResultHandler(event:ResultEvent):void
    {
        myDataFeed = event.result as XML;
    }
    Within my application I am setting my variable values by
    firstly declaring them:
    public var fName:String;
    public var lName:String;
    public var postCode:String;
    public var telNum:int;
    And then giving them a value by "drilling" into
    the XML, e.g.:
    fName = myDataFeed.employeeDetails.contactDetails.firstName;
    lName = myDataFeed.employeeDetails.contactDetails.lastName;
    postCode =
    myDataFeed.employeeDetails.contactDetails.address.postcode;
    telNum = myDataFeed.employeeDetails.contactDetails.postcode;
    etc…
    Therefore, for any of my external components (components in a
    different AS file), I am referencing their values using
    Application:
    import mx.core.Application;
    And setting the values / variables within the AS components
    as follows:
    public var fName:String;
    public var lName:String;
    fName = Application.application.myDataFeed.employeeDetails.contactDetails.firstName;
    lName = Application.application.myDataFeed.employeeDetails.contactDetails.lastName;
    As mentioned, this method seems to work; however, is it the
    best way to do it? That is:
    - Connect to my XML file
    - Set up my application variables
    - Give my variables values from the XML file ……
    Bearing in mind that in this particular application there are
    many variables that need to be set, and therefore a lot of lines of
    code just setting up and assigning variable values from my XML
    file.
    Could someone please advise me on this one?
    Thanks a lot,
    Jon.

    I don't see any problem with that.
    Your alternatives are to skip the instance variables and
    query the XML directly. If you use the values in a lot of places,
    then the variables will be easier to use and maintain.
    Also, instead of instance variables, you could put the values
    in an "associative array" (object/hashtable), or in a dictionary.
    Tracy

  • How to get system time and date with PHP

    Dear Mr.Craig,
      Thanx a lot. We are running SRM 5.0 (RAMP - Implementation).
      My initial requirement is to write a server-side script to display the server date and time. Could you give more insight on how to achieve this?
    Regards,
    Deva.

    Perhaps that will help.
    [code]
    <html>
    <h1>Access system time and date</h1>
    <?php
         // saprfc-class-library
         require_once("saprfc.php");

         $sap = new saprfc(array(
              "logindata" => array(
                   "ASHOST" => "localhost",     // application server
                   "SYSNR"  => "00",            // system number
                   "CLIENT" => "000",           // client
                   "USER"   => "bcuser",        // user
                   "PASSWD" => "minisap"),      // password
              "show_errors" => false,           // let class print out errors
              "debug"       => false));         // detailed debugging information

         $result = $sap->callFunction("MSS_GET_SY_DATE_TIME",
              array(array("EXPORT", "SAPTIME", array()),
                    array("EXPORT", "SAPDATE", array())));

         if ($sap->getStatus() == SAPRFC_OK) {
              echo "Time: ".$result["SAPTIME"];
              echo "<br>Date: ".$result["SAPDATE"];
              echo "<br>or<br>";
              echo "Server is showing: "
                   .substr($result["SAPDATE"], 0, 4)
                   ."-".substr($result["SAPDATE"], 4, 2)
                   ."-".substr($result["SAPDATE"], 6, 2)
                   ." and "
                   .substr($result["SAPTIME"], 0, 2)
                   .":".substr($result["SAPTIME"], 2, 2)
                   .":".substr($result["SAPTIME"], 4, 2);
         } else {
              $sap->printStatus();
         }
         $sap->logoff();
    ?>
    </html>
    [/code]
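    If all that is needed is the web server's own clock rather than the SAP system's date and time, plain PHP would do it without an RFC call (a minimal sketch, not part of the original reply):
    [code]
    <?php
    // Display the web server's current date and time (server's local timezone).
    echo "Server date/time: " . date("Y-m-d H:i:s");
    ?>
    [/code]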

  • Archiving data retention period (best practice)

    Hi,
    Can anybody provide the standard FI-CO data retention period? I mean, for how long does data from the following tables have to be retained?
    BKPF, BSAD, BSAK, BSAS, BSIS
    Best business practice.
    Thanks in advance
    Joseph

    Hi,
    I would not recommend changing this setting. One major problem you will get is that the archiving function will delete data from the database (which has been transferred to the archive). So far so good; that's one of the archiving features.
    But it sometimes happens that you have to open an already closed fiscal year again. And if you open this last closed fiscal year again, the realtime transactions could now fail if data (transactions, etc.) is missing that exists only in the archive at that point in time.
    By the way, you can have a look at SAP note 389920, where the archiving functionality is documented for the FI-AA application.
    Regards,
    Markus

  • Collapsed Data Center Tier - Best Practice

    Hey guys,
    I'm working with a company who's doing a Data Center build-out. This is not a huge build out and I don't believe I really need a 2 tier design (access, core/aggregation). I'm looking for a 1 tier design. I say this because they only really have one rack of hosts - and we are not connected to a WAN or campus network - we are a dev shop (albeit a pretty damn big dev shop) who hosts internet sites and web applications to the public. 
    My network design relies heavily on VRFs. I treat every web application published to the internet as its own "tenant" with one leaked route, which is my management network, so I have any management servers (continuous deployment, monitoring, etc...) sitting in this subnet that is leaked. Each VRF has its own route to a virtual firewall context of its own and out to the internet.
    Right now we are in a managed datacenter. I'm going to be building out their own switching environment utilizing the above design and moving away from the managed data center. That being said I need to pick the correct switches for this 1 tier design. I need a good amount of 10gbe port density (124 ports minimum). I was thinking about going with 4 5672UP or 4 C3064TQ-10GT - these will work as both my access and core (about 61 servers, one fiber uplink to my corporate network, and one fiber uplink to a firewall running multiple device contexts via multiple vlans) 
    That being said - with the use of VRFs, VLANs, and MP-BGP (used to leak my routes), what is the best redundancy topology for this design? If I was using Catalyst 6500s I would do VSS and be done with it - but I don't believe vPC on the Nexus switches does the same thing; it is really more for a two-tier model (vPC on two cores, aggregation/access switch connects up to both cores but it looks like one). What I need to accomplish sounds to me like I'm going to be doing this the old-fashioned way: running a port channel between each switch, and hopefully using a non-STP method to avoid loops.
    Am I left with any other options? 

    ISP comes into the collapsed core after a router. A specific firewall interface (firewall is in multi context mode) sits on the "outside" vlan specific to each VRF. 

  • HCM Master data upload sequence & best practices

    Experts,
    What would be the best method and recommended sequence to upload HCM master data into the below infotypes?
    0,1,2,3   6,7,8,9   207,208,209,210  (payroll)
    21,167,168,169,170,171 and 3.series (Benefits)
    PA0795
    PA2006
    PA2012
    PBO795
    T529T
    T530T
    Please advise.
    Thanks in advance.
    NW

    Hi,
    The best method to mass upload is LSMW
    the sequence will be
    First you need to create the master data so the Action tables need to be configured first
    T529T
    T530T
    Also, other related PA config needs to be completed.
    Then when you will start uploading data the sequence will be
    0, 1, 2, 6, 7, 8, 9, 207, 208, 209, 210, 21, 171, 167, 168, 169, 170, 2006, 2012, 795
    The benefits features (BAREA, BENGR, BSTAT) also need to be configured with all other benefits related config prior to uploading benefits information
    Some other important features like LGMST, TARIF, ABKRS, SCHKZ, etc. also need to be configured prior to uploading the employee master data.
    Hope this will be of help
    Regards,
    Guds
