Collection for file structure

Hi,
What is a good collection to store a file structure? The collection should support deleting, adding, and changing nodes the way a file structure does.
Thanks a lot.
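One common approach, sketched here in Java for illustration (the class and method names are made up): a tree of nodes where each directory keeps its children in a HashMap keyed by name, so adding, deleting and renaming are cheap lookups.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative node type: each directory holds its children in a Map
    // keyed by name, so add/remove/rename are O(1) lookups.
    class FsNode {
        String name;
        final boolean directory;
        final Map<String, FsNode> children = new HashMap<>();

        FsNode(String name, boolean directory) {
            this.name = name;
            this.directory = directory;
        }

        FsNode add(FsNode child) {
            children.put(child.name, child);
            return child;
        }

        void remove(String childName) {
            children.remove(childName);
        }

        void rename(String oldName, String newName) {
            FsNode child = children.remove(oldName);
            if (child != null) {
                child.name = newName;
                children.put(newName, child);
            }
        }
    }

A javax.swing.tree.DefaultMutableTreeNode could serve the same role if the tree is ultimately displayed in a JTree.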

OK, thanks.
So if I want to update the TreeModel, I need to have a file monitor running, right? (I don't have any GUI that informs the TreeModel to update.) So if somebody changes the file structure, I need to detect that and tell the TreeModel to update. Does this make sense?
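For the monitoring piece, java.nio.file.WatchService (Java 7+) is one way to get change notifications without a GUI. A minimal sketch, with error handling and recursive registration of subdirectories omitted; the event loop is where a refreshed tree would fire its TreeModel events:

    import java.nio.file.*;

    // Minimal watch loop: registers one directory and reports
    // create/delete/modify events, the trigger point for a TreeModel refresh.
    public class DirWatcher {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get(args.length > 0 ? args[0] : ".");
            WatchService watcher = FileSystems.getDefault().newWatchService();
            dir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_DELETE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take(); // blocks until an event arrives
                for (WatchEvent<?> event : key.pollEvents()) {
                    System.out.println(event.kind() + ": " + event.context());
                    // here you would update the node tree and notify the TreeModel
                }
                if (!key.reset()) break; // directory no longer accessible
            }
        }
    }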

Similar Messages

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation about file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 types of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but they are not strictly required on a physical standby, because a physical standby is not open and doesn't generate redo. However, if online redo logs are not set up on the physical standby, how can it operate after a failover, when the standby switches to the primary role? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these set up. The primary uses them to archive log files and ship them to the standby; the standby uses them to receive redo and apply it to the database.
    3. Standby redo logs --- The document says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? Reviewing the current redo log settings in my environment, I have found that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts. What is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment is set up as follows: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?

  • Receiver file adapter configuration for deep structure

    Hi Experts,
    I have an IDoc-to-file scenario in which I used a data type for the file in the following format:
    DT_Test
      Recordset (0..unbounded)
        E21DPU1 (0..unbounded)
          field1
          field2
          E21DPU5 (0..unbounded)
            field3
            field4
          E21DP03 (0..unbounded)
            field5
            field6
    Here DT_Test is the data type name and Recordset is a structure containing the E21DPU1, E21DPU5 and E21DP03 structures; E21DPU5 and E21DP03 are nested under E21DPU1.
    I am confused about creating the content conversion parameters, i.e. what we have to mention in Recordset Structure.
    I used E21DPU1,*,E21DPU5,*,E21DP03,*. Should this work for a deep structure?
    Thanks
    Deepak

    Hi,
    The file adapter does not handle structures that are two levels deep.
    The easiest way now is to use an ABAP or Java mapping to create a line for each of the output lines, and handle those lines in the file adapter,
    so like <line> </line>
    <line>E21DPU1 (0..unbounded) with its fields</line>
    <line>E21DPU5 (0..unbounded) with its fields</line>
    <line> etc. </line>
    Regards,
    Michal Krawczyk
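    A generic sketch of that flattening idea in plain Java (this is not the PI mapping API; the class name and the comma delimiter are illustrative assumptions): walk the nested payload with DOM and emit one <line> element per record.

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.Node;

        public class FlattenToLines {
            public static void main(String[] args) throws Exception {
                Document src = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File(args[0]));
                StringBuilder out = new StringBuilder("<lines>\n");
                walk(src.getDocumentElement(), out);
                out.append("</lines>\n");
                System.out.print(out);
            }

            // Emit a <line> for the element's own leaf fields, then recurse into
            // nested structures such as E21DPU5 and E21DP03 under E21DPU1.
            static void walk(Element e, StringBuilder out) {
                StringBuilder fields = new StringBuilder();
                List<Element> nested = new ArrayList<>();
                for (Node n = e.getFirstChild(); n != null; n = n.getNextSibling()) {
                    if (!(n instanceof Element)) continue;
                    Element child = (Element) n;
                    if (isLeaf(child)) {
                        if (fields.length() > 0) fields.append(',');
                        fields.append(child.getTextContent());
                    } else {
                        nested.add(child);
                    }
                }
                if (fields.length() > 0) {
                    out.append("  <line>").append(fields).append("</line>\n");
                }
                for (Element child : nested) {
                    walk(child, out);
                }
            }

            static boolean isLeaf(Element e) {
                for (Node n = e.getFirstChild(); n != null; n = n.getNextSibling()) {
                    if (n instanceof Element) return false;
                }
                return true;
            }
        }

    Run against the DT_Test payload above, this would yield one <line> per E21DPU1, E21DPU5 and E21DP03 record, which a one-level recordset structure can then describe.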

  • How to create a simple File structure for a large project?

    Hi to all,
    I've owned and operated my own website design/development business (a 1-woman office, plus many subcontractors) for a period of 8 years. I started hand-coding HTML sites in 1997, before the creation of DW (though I think the first version was for Mac in '97). Over recent years I've updated my skills to include CSS and enough Java/PHP to customize and/or troubleshoot current projects (learning as I go).
    The majority of my clients have been other 1-10 person entrepreneurial companies. I've recently won a bid to redesign a government site which consists of 30 departments, including their main site.
    The purpose of this thread is to get some ideas on creating a file management structure. Creating a file management setup for smaller companies was a piece of cake, using a simple file management structure within DW. Their current file structure is all over the place. I've read about a very good, simple file structure in a DW CS4 manual and wanted to get feedback on different methods that have worked, or have not worked, for you or your clients:
    Here's my thinking:
    1. Within the root dir, place home.htm and perhaps a few .htm files related only to home.
    2. Create the following folders off the root: "docs, imgs/global, CSS, FLA, Departments"
            - sub-folders within docs for each dept
            - site-wide CSS files placed into CSS
            - site-wide FLAs into FLA
            - sub-folders created within 'imgs' for each dept, including a 'global' folder for site-wide images and menu imgs (if needed)
    - OR -
    1. Create the same file structure within each dept folder, such as 'imgs/CSS/FLA/Docs'.
    Open for suggestions....
    Ciao

    It is a problem I have thought over at length, and I still feel what I use could be better. You are doing it the right way round, researching before you start, as moving files once things are underway can cause real problems. One issue is the use of similar assets across site(s), and version control if you have multiple versions of the same asset.
    I can't say I have built a site of that size, but I would recommend putting together a flow chart to help visualise the structure and find better ways of organising it (works for me). Good luck, and post back with your solution.

  • I search TB for a word in a message. TB finds email containing word. Where does TB tell me the location of the email in my file structure?

    I have my emails organized into folders. I seem to have misfiled a message. TB finds the message when I search on a keyword, but I don't know where the message is located in my file structure. I would like to refile it correctly, so I need to know its current location. The contents are shown in the search results but I can't see the path.
    Thanks
    Gary

    Please see this discussion for some ideas on the issue:
    https://support.mozilla.org/en-US/questions/993657

  • Import Manager Usage : Approaches for developing Import file structure and text validations

    Hi Experts,
    We have 50+ import maps. We have provided an option for users to drop files for data import. Currently Import Manager (7.1 SP08) does not have the capability to validate file data before import, such as:
    a. file structure - number of columns specific to the import map
    b. file text validations - special characters, empty lines, empty cells
    c. uniqueness of the records in the file
    For this, we are planning to build a temporary (port-specific) folder into which users drop files. We would use custom development to perform the above-mentioned validations (see the sketch after this post) and then move the files to the actual import ports.
    Please let us know if you have worked on similar requirements and how you have fulfilled such requirements.
    Regards,
    Ganga
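    A plain-Java sketch of the kind of pre-import checks described above (the class name, delimiter argument and exact rules are illustrative assumptions, not MDM APIs):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.regex.Pattern;

        public class PreImportValidator {
            // Returns a list of problems; an empty list means the file can be
            // moved from the staging folder to the actual import port.
            public static List<String> validate(Path file, int expectedColumns,
                                                String delimiter) throws IOException {
                List<String> errors = new ArrayList<>();
                Set<String> seen = new HashSet<>();
                String quoted = Pattern.quote(delimiter);
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    String line = lines.get(i);
                    int row = i + 1;
                    if (line.trim().isEmpty()) {
                        errors.add("Row " + row + ": empty line");
                        continue;
                    }
                    String[] cells = line.split(quoted, -1);
                    if (cells.length != expectedColumns) {
                        errors.add("Row " + row + ": expected " + expectedColumns
                                + " columns, found " + cells.length);
                    }
                    for (int c = 0; c < cells.length; c++) {
                        if (cells[c].trim().isEmpty()) {
                            errors.add("Row " + row + ", column " + (c + 1) + ": empty cell");
                        }
                    }
                    if (!seen.add(line)) {
                        errors.add("Row " + row + ": duplicate record");
                    }
                }
                return errors;
            }
        }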

    Hi Ganga,
    Assuming you have a well-defined XSD and are getting valid XMLs from the source in the inbound port of MDM, and that you have a primary key in the form of an External ID (say).
    Just by making and defining an XSD you get most of what you want in your questions a and b.
    Now, if you wish to use PI to drop files into the inbound port, you can build all the validations in PI itself and you would not need a staging table.
    Otherwise, you can have another table (preferably a main table) in the same repository or in a dummy repository, where records are created on import based on the External ID.
    Here you can launch an MDM workflow on import of these records, run assignments to replace unwanted characters, and run validations that reject records below the desired data quality level. Once unwanted characters are removed and the data is validated, it can be syndicated using a syndication step in the workflow, so records which fail are not sent and records which pass are sent to an outbound port.
    From the outbound port, PI or some other job can pick up the file from the outbound folder and drop it into the inbound folder of the same repository, which imports into the required primary main table. Here again you have the option to leverage validations in PI and further check that the data is fine.
    Once this activity is done, you can delete the records from the staging table.
    Thanks,
    Ravi

  • File structure for Sets Import - FM_SETS_FIPEX1

    Hi Folks,
    I've been tasked with migrating Funds Group and Commitment Group records into SAP ECC6. I've discovered the import option, but cannot get a file loaded, probably because the import file structure is incorrect. Could someone please provide me with the import file structure, and ideally an example of the file layout (i.e. what does 'Type' mean? When should id 'H', 'R' or 'X' be used?).
    Points will be awarded for a helpful solution.
    Cheers,
    Steve

    The trick is to set up a test group set online, and use the Export option in the FM_SETS_FIPEX1 transaction (under the 'Extras' menu) to extract an example of the file. The import file can then be created using the same file structure. NB: the file is position-based, so saving it as tab-delimited or comma-delimited won't work, as the '#' or ',' delimiters will occupy critical positions in the upload file, causing an error during the load.

  • Looking for the Garbage Collection log files

    Hello,
    I am looking for the Garbage Collection log files which contain the GC events, like this:
    [GC 2095K->1709K(2160K), 0.0017628 secs]
    [Full GC 2161K->1018K(2276K), 0.0576353 secs]
    The server's GC is already configured accordingly:
    -verbose:gc
    I have examined the std_server<x>.out files in the work folder but can't see this info.
    My question is: Where would I find these files on the server?

    I can't find the info in the format I know, neither in dev_server* nor in std_server*.
    In std_server I see XML like this:
    <gc type="scavenger" id="1" totalid="1" intervalms="0.000">
        <flipped objectcount="303111" bytes="24994248" />
        <tenured objectcount="0" bytes="0" />
        <refs_cleared soft="429" weak="4329" phantom="0" />
        <finalization objectsqueued="1711" />
        <scavenger tiltratio="50" />
        <nursery freebytes="497866672" totalbytes="524288000" percent="94" tenureage="10" />
        <tenured freebytes="1094921936" totalbytes="1098907648" percent="99" >
          <soa freebytes="1039977168" totalbytes="1043962880" percent="99" />
          <loa freebytes="54944768" totalbytes="54944768" percent="100" />
        </tenured>
        <time totalms="185.585" />
      </gc>
    I guess this is the info I need, but it's not formatted in a way that your tool can read, and having a tool like this read it would be very helpful.
    Any ideas?
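    A hedged converter sketch in plain Java: it reads that XML and prints a classic-style summary line per collection. It assumes the <gc> elements have been wrapped in a single root element (raw verbose GC output is not one well-formed document); attribute names follow the snippet above, and the class name is made up.

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class GcXmlToLines {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File(args[0]));
                NodeList gcs = doc.getElementsByTagName("gc");
                for (int i = 0; i < gcs.getLength(); i++) {
                    Element gc = (Element) gcs.item(i);
                    Element nursery = (Element) gc.getElementsByTagName("nursery").item(0);
                    Element time = (Element) gc.getElementsByTagName("time").item(0);
                    if (nursery == null || time == null) continue;
                    long total = Long.parseLong(nursery.getAttribute("totalbytes"));
                    long free = Long.parseLong(nursery.getAttribute("freebytes"));
                    // e.g. "[GC scavenger used 25802K(512000K), 185.585 ms]"
                    System.out.printf("[GC %s used %dK(%dK), %s ms]%n",
                            gc.getAttribute("type"),
                            (total - free) / 1024, total / 1024,
                            time.getAttribute("totalms"));
                }
            }
        }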

  • Collecting static files for Django app

    Hi there,
    I'm trying desperately to run the command 'manage.py collectstatic' to generate static files for my Django application, as several items are missing from my project once deployed to Azure.
    Where on earth do I type this? I'm getting nothing but syntax errors.
    Somewhere out there...

    Hi Firkinfedup,
    Welcome to the MSDN forum.
    Your issue is outside the scope of the VS General Questions forum, which mainly discusses the usage of the Visual Studio IDE, such as the WPF & SL designer, the Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System, and the Visual Studio Editor.
    Because your issue is how to collect static files for a Django application from the command line, I suggest you consult the Django community at
    https://www.djangoproject.com/community/ for better support.
    Best regards,
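    For what it's worth: collectstatic is a Django management command typed into an ordinary terminal, from the project directory that contains manage.py; running it inside the Python interpreter produces exactly such syntax errors. The path below is illustrative, and STATIC_ROOT must be set in settings.py:

        cd /path/to/project          # the folder containing manage.py
        python manage.py collectstatic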

  • File Content Conversion for complex structure

    Hi
    I have a requirement to repeat a structure consisting of three lines, i.e. the segment containing these 3 lines will have occurrence 0..unbounded and will be repeated in the file multiple times.
    What level of nesting can be handled in File Content Conversion's recordset structure?
    My issue is that the structure is
    DT_File -> RepeatingLevel1 -> Level2  -> Field1
                               -> Level2a -> Field3
                               -> Level2b -> Field5
    where the first level is DT_File,
    under which I have RepeatingLevel1 as a sub-element,
    under RepeatingLevel1 I have Level2, Level2a and Level2b as its sub-elements,
    and under Level2 I have Field1, under Level2a I have Field3 and under Level2b I have Field5.
    How do I handle this in the content conversion? That is, how do I create my Recordset Structure, given that it just handles one level below the document name (i.e. the message type)?
    Is it possible? Or should I consider some other way to construct my data type?
    Thanks
    Dev

    Hi Tarang
    My DT, according to the target file structure, is this:
    DT_File
      Main1 (1,1)
      Main2 (1,1)
      Main3 (0..unbounded)
        Record1 (1,1)
          Field1
          Field2
          Field3
        Record2 (1,1)
          Field4
          Field5
        Record3 (1,1)
          Field6
          Field7
    So I want to confirm whether the receiver FCC Recordset Structure will be Main1,Main2,Main3,Record1,Record2,Record3 with
    Record1.fieldSeparator  ,
    Record1.endSeparator    'nl'
    Record2.fieldSeparator  ,
    Record2.endSeparator    'nl'
    Record3.fieldSeparator  ,
    Record3.endSeparator    'nl'
    or Main1,Main2,Record1,Record2,Record3
    Thanks
    Dev

  • File structure and hierarchy in Aperture.

    Hi, everyone.
    I am trying to import folders with images into Aperture and, much to my surprise, have found that this process is anything but intuitive and straightforward.
    My library (outside of Aperture) is made up of folders, each corresponding to a day and location. These folders are labeled using the date and location as part of their name.
    As I try to import these folders into Aperture, I can't seem to import them into a single folder or project as sub-folders or sub-domains of that project. I have tried to create a project and then import the folders into it, but Aperture won't import them into that project and doesn't allow me to drag and drop them either once they have been imported. I have also tried to create albums and folders but haven't been successful.
    How does the file structure in Aperture work in terms of hierarchy? It certainly isn't structured the way the Finder is, or any other file system I have seen to date. Creating folders, organizing them and bringing any type of data into those folders should be a simple process, but in Aperture it doesn't seem to be.
    Am I doing something wrong, or is Aperture trying to re-invent the wheel?
    Is there any tutorial I can watch or read on how to work with Aperture's file structure and import folders into it?
    Thank you in advance.

    Regarding your specific problem, I suggest treating each existing dated folder of image files as a single Project in Aperture. In Aperture a Project is a specific time-based concept that may or may not jibe with what you previously considered a project. For instance, I may have in my mind that shooting all the highest peaks in every state is a "project," but that would be inappropriate as an Aperture Project. Instead each peak might be a Project, and the pix of all the peaks would be pointed to by an Album.
    The way I look at it conceptually:
    Aperture is a database (DB), and each image file lives in one Project.
    Albums are just collections of pointers that point to individual image files living in one or more Projects. Since they contain only pointers, albums can be created or deleted at will without affecting image files. Very powerful. And albums of pointers take up almost zero space, so they are fast and do not make the Library size grow.
    Keywords can be applied to every image separately or in batches. Keywords are hugely powerful and largely obviate the need for folders. Not that we should never use folders, just that we should use them only when useful organizationally, after first determining that using keywords and albums is not a better approach.
    As one example, imagine the keyword "flowers." Every image in a 100,000-image Library that has some flowers in it gets the keyword flowers. Then say we want to put flowers in an ad, or as background for a show of some kind, or to print pix for a party, or even just to look for an image for some other reason. We can find every flower image in a 100k-image database in 2 seconds, and in another few seconds create an Album called "Flowers" that points to all of those individual images.
    Similarly all family pix can have the keyword "family" and all work pix can have the keyword "work." Each individual pic may have any number of keywords. Such pic characteristics (work, family, flowers, etc.) should not be organized via folders.
    So by using keywords and albums we can have instant access to every image everywhere, very cool. And keywords and albums take up essentially no space in the database.
    Another approach is to use a folder "Family" for family pix, a folder "Flowers" for flower pix and another folder "Work" for work pix. IMO such folder usage is a very poor approach to using an image database (probably stemming from old paper or film work practices). Note that one cannot put an image of family in a field of flowers at a work picnic into all three folders; but it is instant with keywords.
    HTH
    -Allen

  • Role Assignment Discovery Issue for Files and Folders through SharePoint REST services

    To preface, I am a decided SharePoint newbie in every sense. I am trying to use the SharePoint REST services (SharePoint 2013) to walk the folder and file structure of my SharePoint server and determine, as I go, the Role Assignments (and subsequently Permissions) on those folders and files. I'm using Administrator credentials and I'm actually able to do this successfully, but I've run into some caveats. All the caveats begin with this: when I'm examining a folder, for example:
    /_api/Web/GetFolderByServerRelativeUrl('/sites/cmisdev/Development')/ListItemAllFields
    I receive either an empty list or an error response doc when following the link supplied for ListItemAllFields. When following that kind of link for folders, I either get:
    <d:ListItemAllFields
    m:null="true"
    />
    or an error response document that says "The object specified does not belong to a list." When I hit the /ListItemAllFields endpoint for files, I receive a response with a link for Role Assignments, which subsequently also works, and I get the info I need. So, is this a bug? Why does the link returned from SharePoint work for files and not folders? So, google, google, google, and I discover that there is another possible way to get at the Role Assignments (and that the object does, indeed, belong to a list!).
    If I know the Title (or the GUID) of the folder in question, I can use the following endpoint:
    /_api/Web/Lists/GetByTitle('Development')
    If I use that endpoint, I get the information I would have expected from following /ListItemAllFields, and the subsequent Role Assignments links all work and give me what I need. If there's a bug and this is how I have to work around it, that's fine, but I have yet to discover how to dynamically determine the Title of a given folder, nor am I sure whether all Titles are supposed to be unique within a given SharePoint server. I'm assuming that the folder name as represented in the server-relative URL and the Title may be different, and this is where my newbishness may start to shine if I'm misunderstanding what a "List" is supposed to be in SharePoint. Anyway, I did find that I could use the Properties endpoint to perhaps get the Title; for example:
    /_api/Web/GetFolderByServerRelativeUrl('/sites/cmisdev/Development')/Properties
    gives me:
    <d:vti_x005f_listtitle>Development</d:vti_x005f_listtitle>
    whose value I assume I could then supply to the /GetByTitle endpoint and be golden. However, "vti_x005f_listtitle" just sounds a little too deep to be something I should be relying on, but maybe that's kosher. That's part of what I'm trying to find out. Also, if there is a way to use the SharePoint REST API to discover the GUID of a given object, then I could look objects up that way.
    So, in summary:
    1. Am I going about getting folder Role Assignment information in the wrong way? Based on the CSOM examples I've seen, I believe I'm doing it correctly and that the answer to #2 below is a resounding "Yes!" :)
    2. Is it a bug that I'm not able to use /ListItemAllFields on folders using the server-relative URL?
    3. If I'm supposed to use GetByTitle as a workaround, am I discovering that Title correctly through /Properties? It seems quite circuitous and awkward. Are Titles required to be unique throughout a given SharePoint server?
    4. If I'm supposed to use the GUID, how can I use the REST interface to discover an object's GUID? Once we get down to the Role Assignments and other links, the GUID appears in those links, but I don't know how to discover it independently if that's the path I should use to get the data described above.

    Upon further research, I'll answer my own question for the benefit of some potential future newbie. The answer to question number 1 above is "Not exactly." The server-relative URLs I was using corresponded to lists (which are returned as a collection through /_api/web/lists). I was treating them mentally like regular folders. That, coupled with the fact that accessing their data as I showed above returns a ListItemAllFields link, made me think that was the way to get the Role Assignments, just as I would for files and, as it turns out, "real" folders and sub-folders created under these lists. That was the other problem with thinking of these lists as regular folders. So, ListItemAllFields works on all files and folders in a list. However, if you want Role Assignments for the lists themselves, you can keep track of the Titles and/or GUIDs from /_api/web/lists that you're interested in (in my case, all non-hidden "document library" type lists) and then access those Role Assignments as I discussed in questions 3 and 4 above. For example, from the /_api/web/lists collection on my test server, the "Development" document library Role Assignments are accessible via /_api/Web/Lists(guid'cd242eeb-aafa-4efa-aecc-9bbdf8e3d459')/RoleAssignments
    or /_api/Web/Lists/GetByTitle('Development')/RoleAssignments.
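    A minimal Java sketch of calling that last endpoint with java.net.http (Java 11+). The host and site path are illustrative, and authentication is only a placeholder here since it varies by deployment (SharePoint 2013 on-premises typically uses NTLM, which this sketch does not implement):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class SpRoleAssignments {
            public static void main(String[] args) throws Exception {
                String base = "https://sharepoint.example.com/sites/cmisdev"; // illustrative host
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest req = HttpRequest.newBuilder()
                        .uri(URI.create(base
                                + "/_api/Web/Lists/GetByTitle('Development')/RoleAssignments"))
                        .header("Accept", "application/json;odata=verbose")
                        // real calls need authentication, e.g. a Bearer token:
                        // .header("Authorization", "Bearer " + token)
                        .build();
                HttpResponse<String> resp =
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                System.out.println(resp.statusCode());
                System.out.println(resp.body());
            }
        }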

  • Backend File Structure

    Could you please tell me if there is a way in Berkeley DB Java Edition (in the DPL APIs if possible) to choose which file structure my application uses on the back end (i.e. B+tree, hash table, record number), or is it chosen automatically by the system (i.e. a B-tree for the main file, hash tables for collections in the main file, and record numbers if a Sequence is used as the primary key)?
    Also, is the B-tree that is used a B+tree?
    Thank you,
    Samir

    Hi Samir,
    I think you have the two products confused. BDB (the C-based product) has multiple access methods (BTREE, RECNO, etc.). BDB-JE does not; it only supports BTREE and there is no configuration option.
    For BDB (and of course BDB-JE), the DPL only supports use of the BTREE access method.
    --mark
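    A minimal DPL sketch illustrating mark's point (the entity class, store name and environment path are made up; the environment directory must exist before running): whatever you configure, the store is backed by a B-tree, and record-number-style keys can be approximated with a key sequence.

        import java.io.File;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.persist.EntityStore;
        import com.sleepycat.persist.PrimaryIndex;
        import com.sleepycat.persist.StoreConfig;
        import com.sleepycat.persist.model.Entity;
        import com.sleepycat.persist.model.PrimaryKey;

        // Hypothetical entity: the sequence assigns increasing long keys,
        // which is as close as BDB-JE gets to RECNO-style access.
        @Entity
        class Account {
            @PrimaryKey(sequence = "ID")
            long id;
            String owner;
        }

        public class DplExample {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                Environment env = new Environment(new File("/tmp/je-env"), envConfig);

                StoreConfig storeConfig = new StoreConfig();
                storeConfig.setAllowCreate(true);
                EntityStore store = new EntityStore(env, "AccountStore", storeConfig);

                PrimaryIndex<Long, Account> byId =
                        store.getPrimaryIndex(Long.class, Account.class);
                Account a = new Account();
                a.owner = "Samir";
                byId.put(a); // key assigned from the sequence; stored in a B-tree

                store.close();
                env.close();
            }
        }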

  • Retaining existing file structure when importing into iPhoto 8

    Hello,
    I'm new to the list and to iPhoto. I'm an historian, and I have about 10,000 images of documents on my Mac, all divided into dozens of folders, sub-folders, and sub-sub-folders, depending on where the documents originated in the collections I was researching. I'd like to be able to work with these images in iPhoto 8. When I tried to import a folder that contained photos in sub-folders, iPhoto simply imported the photos without the folder structure, making them pretty much useless for me. Can anyone on the list tell me how to work with these images in iPhoto without losing the existing file structure, and without needing to recreate that file structure within iPhoto? Barring that, I'd be obliged for suggestions of other programs that can do this. Thank you!

    Welcome to the Apple Discussions. If you must maintain your folder structure then you might want to look at another application. Media Expression will do exactly what you want: you set up your folder system and it works with it. You can keyword, set up categories (albums in iPhotoese) and lots more. It's one of the better DAM (digital asset management) applications.
    Otherwise you can simulate your folder structure in iPhoto with iPhoto's virtual folders and albums.

  • Hi, I'm new to Mac and need my photo file structure to resemble what it was on my PC

    I uploaded all my new photos from my PC, which then fed into iPhoto.
    However, when I look into the iPhoto library, the folders and files have automatically created names, rather than the names they had on my PC. In the iPhoto app itself they appear fine.
    How do I get the library structure to resemble the folder names shown in the iPhoto app or on my old PC?
    I am running the latest operating system and iPhoto app.

    Just to highlight a couple of major issues with a referenced library:
    Importing: Copy items to the iPhoto Library
    This preference is turned on by default. When you import photos from a hard disk, iPhoto copies (duplicates) the photos into the iPhoto library and uses the copies. The original photos remain untouched in their current locations.
    If you have an extensive collection of photos on your computer and you want to keep your current file organization, you can have iPhoto point to your original photo files instead of importing them into your iPhoto library. To have iPhoto access your photos from their current locations on your hard disk (rather than making copies of them), deselect this checkbox. Note that if you edit one of these images in iPhoto, the edited version is saved in the iPhoto library and the original file remains untouched.
    If you turn off this preference, be sure not to move the photos from the location from which they were imported. If you move the files, your iPhoto library won't work correctly.
    So you still have not reproduced the PC structure, since you only have that structure for the original photos; no edits appear in that structure.
    Importing is more complicated.
    Deleting is more complicated, since you have to delete from iPhoto and then from your own managed file structure.
    And, most critically, you cannot move the photos, so replacing defective hardware or upgrading to a larger hard drive or a new computer is not possible.
    Plus there is no gain from a referenced library, so I do not recommend it, since it works for hardly anyone.
    LN
