Best practices to manage large array inside an event loop?

My program is built around an event structure, where each event corresponds to a boolean value changed by the user. The program begins by reading a .csv file, typically 50,000 rows and 100 columns of string values. I add some columns, predict the final array size, and write to two array controls - input and output. The input array always shows the original data as read from the file. The output array shows the current state of the data. The user then presses buttons that trigger events; in each event I read the output array value, manipulate the data, and write it back to the output array. If an event has to manipulate only the 7th column (for example), I index out that column, manipulate it, and replace that column in the control. I am using the 'Value' property node to read and write data to the output array. I NEVER use the Build Array function. I have to update the control at the end of each event so the user can see the effect of the event.
Typically, my program runs fast. I can go through all the events and even repeat them at a good speed. However, when I read the .csv file a second time, the program slows down; a third time, it is even slower. When I read the file, I predict the final size of the input and output arrays and refresh the controls with the new values from the .csv file. What am I doing wrong?
1. Should I use a shift register in my event structure to read and update the output array instead of using a property node? Or should I read and write using a reference to the control?
2. When I read a control using a property node, am I creating a second copy of the data?
3. I believe that each time I read or write a control or indicator, LabVIEW switches threads. This may be causing some delay, but I doubt it is the reason for my troubles. Right or wrong?
4. Is there a way to deallocate all of the input and output array control memory? Would it help to write an empty array before I refill the controls from a new .csv file?
I can add screenshots of my code. I cannot provide the complete program due to security issues.
Using LV 8.5 Professional on Windows 7 64-bit.
Thanks,
Gurdas
Gurdas Singh
PhD. Candidate | Civil Engineering | NCSU.edu

Crossulz, I updated my code and replaced all property nodes of the output array indicator with a shift register. I see some speed/memory improvements (more about that later).
1. Sorry about the wrong terminology. My output array is an indicator and not a control; I was using the word because I use this array to control the saved output. I am not sure I understood what you meant by writing right before the event structure. Did you mean having the terminal between the outer while loop (which has the shift register) and the event structure? Wouldn't that update the indicator each time the while loop runs? For lack of a better solution, I created an event to update the indicator using the shift register value. See attached screenshot, "Gurdas_3*.jpg".
2. Okay.
2a. Wires won't work because controls and indicators are being manipulated in different events. Shift registers could work, but that means having many shift registers, one for each control or indicator. I've read about Feedback Nodes, but I'm not sure if they exist in LV 8.5.1. If they do, are they an option?
4. See attached screenshot "Gurdas_4*.jpg". I initialize the shift register with an empty array and then update it with the data when the file is read. Is this what you meant? I still have an input data indicator that I am writing to, but that is a one-time write; this indicator is not read or updated anywhere else.
4a. Notice the 3 branches of the data wire just after the file read? Are those creating 3 copies by any chance?
5. See attached screenshot "Gurdas_5*.jpg". This is from the most computation-intensive event, which uses the current data from the shift register. I am indexing out many columns. Is that creating extra copies of those columns?
About the improvements from using the shift register. LV's memory usage as reported by Task Manager:
A. Before file read: 70 MB
B. After file read: 750 MB (this includes a one-time write to the input data indicator; if I do not write it, memory usage is 450 MB)
C. After completing all steps but never writing to the output indicator array: 1.1 GB
D. After completing all steps and writing to the output indicator array: 1.5 GB (this used to be 2 GB)
The file I am reading is 47,000 rows and 78 columns. I read the file as strings. The on-disk size of the file is less than 20 MB.
6. Comparing steps A and B above, why does memory use increase so much more than the actual file size being read?
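(Even setting aside the one-time write to the input data indicator, that is roughly 380 MB of growth for a file that is under 20 MB on disk, a factor of about 20.)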
Thanks for helping me here!
Gurdas Singh
PhD. Candidate | Civil Engineering | NCSU.edu
Attachments:
Gurdas_03_UpdateOutputArray.JPG 41 KB
Gurdas_04_New-ReadCSV.JPG 125 KB
Gurdas_05_IndexArray.JPG 77 KB

Similar Messages

  • What are best practices for managing my iPhone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1 TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • Best practices for managing Movies (iPhoto, iMovie) to IPhone

    I am looking for some basic recommendations/best practices on managing the syncing of movies to my iPhone. Most of my movies come either from a digital camcorder into iMovie or from a digital camera into iPhoto.
    Issues:
    1. If I do an export or a share from iPhoto, iMovie, or QuickTime, what format should I select? I've seen 3gp and m4v.
    2. When I add a movie to iTunes, where is it stored? I've seen some folder locations like iMovie Sharing/iTunes. Can I copy them directly there, or should I always add to the library in iTunes?
    3. If I want to get a DVD I own into a format for the iPhone, how might I do that?
    Any other recommendations on best practices are welcome.
    Thanks
    mek

    1. If you type "iphone" or "ipod" into the help feature in iMovie, it will tell you how.
    "If you want to download and view one of your iMovie projects to your iPod or iPhone, you first need to send it to iTunes. When you send your project to iTunes, iMovie allows you to create one or more movies of different sizes, depending on the size of the original media that’s in your project. The medium-sized movie is best for viewing on your iPod or iPhone."
    2. Mine appear under "Movies", which is where iMovie put them automatically.
    3. If you mean movies purchased on DVD, then copying them is illegal and cannot be discussed here.
    From the terms of use of this forum:
    "Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store."

  • What is the best practice for deleting large amounts of records?

    hi,
    I need your suggestions on the best practice for regularly deleting large amounts of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove, every day, all records older than 3 days.
    For an on-premise SQL Server I could use a SQL Server Agent job but, since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting too large an amount of records at once, my web job code limits the number of deleted records to 5,000 and the batch delete count to 1,000 each time the purge stored procedure is called:
    1. Get total amount of old records (older then 3 days)
    2. Get the total number of iterations: iterations = total count / 5000
    3. Call the SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords
        @BatchCount INT,
        @MaxCount INT
     AS
     BEGIN
      -- RecordId is assumed to be an INT here; adjust to match the real key type
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)
      -- collect up to @MaxCount ids of records older than 3 days
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
      DECLARE @RowsDeleted INTEGER
      SET @RowsDeleted = 1
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01'
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
       SET @RowsDeleted = @@ROWCOUNT
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5,000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the performance of deleting records?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times higher than the average (avg); e.g. suppose the average per hour is 10,000 and the peak per hour is 5 times higher, that gives 50,000. This doesn't have to be precise.
    3. The desired maximum number of records to delete per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times higher per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that deletes records with creation time < today - (3 days - 4.32 * I minutes), where I is the iteration number from 1 to 1,000.
    This way the number of records deleted in each batch is uneven and not known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch is very fast.
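    A minimal T-SQL sketch of this time-window idea, assuming an index on [CreateTime] and reusing the table/column names from the question; it advances the cutoff from the oldest row up to the 3-day boundary (never past it), so rows younger than 3 days are never touched:
     DECLARE @Boundary DATETIME = DATEADD(DAY, -3, GETDATE())  -- never delete anything newer than this
     DECLARE @Cutoff DATETIME
     SELECT @Cutoff = MIN([CreateTime]) FROM [MyTable]         -- start at the oldest record
     WHILE (@Cutoff IS NOT NULL AND @Cutoff < @Boundary)
     BEGIN
      -- advance the window by 4.32 minutes (259 seconds), but never past the 3-day boundary
      SET @Cutoff = DATEADD(SECOND, 259, @Cutoff)
      IF (@Cutoff > @Boundary) SET @Cutoff = @Boundary
      DELETE FROM [MyTable] WHERE [CreateTime] < @Cutoff       -- earlier windows are already gone, so this touches only the new slice
      WAITFOR DELAY '00:00:01'                                 -- brief pause between batches
     END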
    Frank

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS. I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment, which brings several questions to mind:
    1. Which department(s) in a company should “own” the BPC application?
    2. If both, what’s SAP’s recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP’s “Best Practice”?
    We are currently evaluating the best way to manage the system across multiple departments; however, there is no real business ownership of the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.


  • What's the best practice to manage the page file?

           
    We have one Hyper-V server running Windows Server 2012 R2 with 128 GB RAM and 2 drives (C and D). It is set up with "Automatically manage paging file size for all drives". What's the best practice to manage the page file?
    Bob Lin, MCSE & CNE Networking, Internet, Routing, VPN Networking, Internet, Routing, VPN Troubleshooting on http://www.ChicagoTech.net How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen. 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make sense on a Hyper-V system to set aside enough space to capture all the memory on the system, because only a very small portion of that memory is used by the parent partition. Most of the memory is under the control of the individual VMs.
    Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it. A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file, making it so I couldn't patch my systems. I went back in and set the page file to 1-4 GB and have not had any issues since.
    . : | : . : | : . tim

  • General Discussion - Best practice to manage Process order

    Hi Experts,
    Which is the best practice to manage process orders?
    1. Quantity change - I can make the quantity adjustment in R3 and APO.
    2. Source change - I can make a version change from the order header. I can also make a source change in APO by selecting a different PPM. Which is the better option?
    3. Re-read master data - Is the best practice to read master data from R3 or from APO?
    I feel that for all the above scenarios process orders should always be managed in R3. But I am still wondering why we have the same flexibility in APO too.
    Can

    Hello,
    we are just migrating from 4.6c to ECC 6.0 and I have a couple of workflows to adapt.
    For background steps I defined in the corresponding BOR methods an exception to be fired when no result is available (e.g. no mail address available). Normally, I defined them as temporary errors.
    I activated in the WI outcome section the line for this exception and so the workflow processed this branch when the exception appeared. It worked fine.
    Now, in ECC 6.0, the same workflow get stuck in the WI. The exception is fired (I can see it in the log as "Error message"), but the WI is still in status "in process". It doesn't continue with the error outcome branch.
    Is this new logic in ECC 6.0? Do you have any idea what to do? I used this logic a few dozen times in different methods and workflows, and it gives me a headache if I have to change everything ...
    Thank you!
    Best regards,
    Thomas

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses for each reference to a class - there are hundreds of them. I'm having difficulty understanding what the best practice is for getting consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign, but I can see it's there when I open the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign and then perfecting them in Acrobat (removing extra links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety every month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited - the hyperlink, the content, or something else?

  • Best practice for management of Portal personalisation

    Hi
    I've been working in our sandpit client figuring out all the portal personalisation and backend/frontend customisation for ESS (with some great help from SDN). However, I'm not clear on what best practice is when doing this for 'real'.
    I've read in places that objects to be personalised should be copied first. Elsewhere I've read about creating a new portal desktop for changes to the default framework page, etc.
    What I'm not clear on is the strategy for how you manage and control all the personalisation changes. For example, if you copy an iView and personalise it, where do you copy it to? Do you create "Z" versions of all the folders of the standard content, or something else?
    Do you copy everything at the lowest possible object level, or at the highest?
    What are the implications of applying support or enhancement packs?
    We're going pretty vanilla to begin with, in that there will just be a single ESS role, so things should be fairly simple from that point of view, but I've trawled all around SDN without being able to find clear guidance on the overall best practice process for personalisation.
    Can anyone help?
    Regards
    Phil
    Edited by: Phil Martin on Feb 15, 2010 6:47 PM

    Hi Phil,
    To keep things organized I would create a new folder (so don't use the desktop one). The new folder should sit under Portal Content. In that folder create a folder for ESS, then inside that one for iViews (e.g. COMPANY_XYZ >>> ESS >>> iViews), then copy, via delta link, the iViews you want from the standard SAP folders.
    Then you will have to create another folder for your own roles (e.g. ESS >>> Roles). Inside this folder create a new role and add your iViews (again via delta link). Or you could just copy one of the standard SAP roles into this folder and not bother with copying the iViews. What you do depends on how many changes you intend to make; if the changes are only small, it is sometimes easier to copy the entire SAP role (via delta link) and make your changes.
    You should end up with something like this in your PCD:
    Portal Content
         COMPANY_XYZ
              ESS
                   iViews
                        iView1
                        iView2 etc..
                   Roles
                        Role1
                        Role2 etc...
    Then don't forget to assign your new roles to your users or groups.
    Hope that makes sense.
    BRgds,
    Simon
    Edited by: Simon Kemp on Feb 16, 2010 3:56 PM

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware or RAC before.
    Two space problems:
    1) A very large listener_scan2.log in the /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in the /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4 GB used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually) and on the recommended practice and safest way going forward (ADRCI maybe, though I'm not sure how it works with scan listeners).
    I'd also welcome advice and commands that could be used to safely clean these up and put a robust mechanism in place for logfile management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n-days. Once you remove an active file, it should recreate the file and continue without interruption.

  • Any "best practices" for managing a 1.3TB iPhoto library?

    Does anyone have any "best practices" or suggestions for managing and dealing with a large iPhoto library? I currently have a 1.3 TB library. This is made up of everything shot in the past 8 years, culminating with the past 2 years being 5D Mark II images. It also includes a big dose of 1080p video shot with the camera. These are our family photos, so I would hate to break up the library into years, as that would really hurt a lot of iPhoto's "power".
    It runs fine in day to day use, but I recently tried to upgrade to iPhoto 11 and it crashes repeatedly during the upgrading library process.
    (I have backups, so no worries there.)
    I just know with Lion and iPhoto 9 being a bit old my upgrade day is coming and I'm not sure what my path is.

    If you have both versions of iPhoto on your Mac, then try the following: while running iPhoto 09, create a new test library and import a few photos into it. Then try to convert that test library with iPhoto 11. If it converts OK, then your big library is causing the problem.
    If that's the case, you can try rebuilding your working library. Make a temporary backup copy of the library and try the following:
    Launch iPhoto with the Command+Option keys held down and rebuild the library. Select the options identified in the screenshot.
    Once rebuilt, try converting it to iPhoto 11.
    NOTE:  if you already have a backup copy of your library you don't need to make the temporary backup copy.
    OT

  • Best practice file management

    Hello everyone
    I'm hoping to receive some advice on the best way to store a large collection of music, pictures and video, whilst keeping my computer as empty as possible to maximise its processing power for professional video editing in the future. The computer is for both personal and business use, and I share it with my partner, who is a music fiend but not computer savvy.
    I have recently purchased a new 27" iMac with the standard specs (3.1 GHz quad-core Intel Core i5, 4 GB memory, 1 TB hard drive). It is currently running Snow Leopard (v10.6.6), and I plan to update to Lion when I purchase a new broadband account.
    I also have:
    One external 1TB Western Digital drive, currently at 85% capacity, with music, videos and pictures
    One external 1TB Western Digital drive, currently empty
    A new 2TB time capsule which is not yet set up
    Apple TV, not yet set up
    A mobile USB modem and basic account, soon to be replaced by a fairly high speed broadband modem with a fairly large download cap
    So far, I have:
    created three user profiles - one administrator, and two users
    Set up iTunes so each user shares a single library and itunes media folder
    We would like to also digitise a large collection of CDs and records.
    I was thinking about using the Time Capsule not only for storage/backup, but also to create a wireless network, allowing it to be kept with my printer, while the hard drives can be kept in a different room from the computer, away from sight. The computer is so very gorgeous on its own, after all... I've also been advised to use it as a wireless router when we purchase our broadband account, so I'm assuming the modem should also be connected to the Time Capsule in the other room.
    Rather than droning on about what I think I should do, I wondered if one of you experts could advise me on the best way to set everything up? I'm not sure that it's the best idea to set it up so the computer is always having to find files wirelessly from the time capsule and connected drives...  Wouldn't that be slow?
    The advice I've read in the various forums has been rather confusing, so your advice would be really appreciated!!
    Cheers
    Fiona

    Hi Fiona,
    Like your question, I'm in the same boat and new to iMac altogether, and I want to set up a backup and sharing strategy via best practice right up front. Did you get any response, or did you run across any good best practice in your research that you can share with me? Thanks.

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but since we have a template for the vbscript wrapper and it's a standard process, I believe our way is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TSes are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best Practice for Managing Cookies in an Enterprise Environment

    We are upgrading to IE11 for our enterprise. One member of the team wants to set a group policy that will delete all cookies every time the user exits IE11. We have some websites that users access that use cookies to track progress in training, but those cookies are deleted when the user closes the browser. What is the business best practice regarding deleting all history, temporary internet files and, especially, cookies when closing a browser?
    If you can point me to a white paper on this topic, that would be helpful.
    Thanks
    Bill

    Hi,
    Regarding cookie settings, we can manage IE privacy settings using the Administrative Templates for IE 11:
    Administrative templates and Internet Explorer 11
    Delete and manage cookies
    The Administrative Templates for IE 11 can be downloaded from here:
    Administrative Templates for Internet Explorer 11
    Hope this may help
    Best regards
    Michael Shao
    TechNet Community Support

  • What is the best practice to manage versions in XI?

    Hi!
    Is there any good “best practice” way to manage versions in XI?
    Have a challenging scenario with many legacy systems and many interfaces per legacy system.
    Should I put all the different interfaces for one legacy system under one namespace in the Integration Repository, or should I create a separate namespace for each interface?
    Or are there other approaches I should consider to get an environment that is “easily” maintained?
    br.samuli

    Hi,
    In our project we have defined our own naming conventions/namespaces.
    For instance, we have agreed that we will group all interfaces (from all offices worldwide and different projects) into one single product version.
    This global custom product will in turn be divided into different SWCs (Software Components) and SWCVs (Software Component Versions).
    So for each business scenario we will create separate SWCs and, when necessary, create new versions of these SWCs.
    This means that each SWC should contain all the required objects to support a complete integration scenario, i.e. inbound/outbound interfaces, data types, message types, business scenarios, etc.
    Regards,
    Rob.
