Flarecreate for ZFS root dataset and ignoring multiple datasets

Hi All,
I want to write a script to create flar images on multiple servers. On non-ZFS filesystems I use the -X option to point at a file that lists the mounts to exclude on the different servers.
On ZFS, however, the -X option is not working. I want multiple mounts to be ignored on a ZFS-based system during flarcreate.
I can use the -D option to ignore datasets on a server, but that is not serving my purpose, as I am maintaining one common file to exclude the mounts on all the different servers.
Please help me with this.
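
For what it's worth, one way to keep a single shared exclude file and still use -D on ZFS systems is to build the -D arguments at run time; a rough, untested sketch (the exclude-file path and names here are made up, not from the thread):

# /etc/flar_excludes is a hypothetical shared file listing one dataset/mount per line
D_ARGS=""
while read ds; do
    # only pass -D for datasets that actually exist on this host
    if zfs list -H -o name "$ds" >/dev/null 2>&1; then
        D_ARGS="$D_ARGS -D $ds"
    fi
done < /etc/flar_excludes
flarcreate -n "`hostname`_flar" $D_ARGS "/var/tmp/`hostname`.flar"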


Similar Messages

  • Live upgrade only for zfs root?

    Only live upgrade for ZFS root on 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18GB drive.

    Reading over this post I see it is a little unclear: I am trying to upgrade a u6 installation that has a ZFS root to u7.
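
    For what it's worth, live upgrade of a ZFS root from u6 to u7 usually follows this flow (a sketch; the BE name and media path are examples):

    lucreate -n s10u7                          # clone the current ZFS-root BE
    luupgrade -u -n s10u7 -s /cdrom/cdrom0     # upgrade the clone from the u7 media
    luactivate s10u7                           # make it the active BE
    init 6                                     # reboot into the upgraded BE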

  • Solaris 10 with zfs root install and VMWare-How to grow disk?

    I have a Solaris 10 instance installed on an ESX host. During the install I selected a 20GB disk. Now I would like to grow the disk from 20GB to 25GB. I made the change in VMware, but now the issue seems to be Solaris: I haven't seen anything on how to grow the FS in Solaris. Someone mentioned using fdisk to manually change the number of cylinders, but that seems awkward. I am using a ZFS root install too.
    bash-3.00# fdisk /dev/rdsk/c1t0d0s0
    Total disk size is 3263 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                             Cylinders
    Partition   Status   Type           Start   End   Length    %
    =========   ======   ============   =====   ===   ======   ===
        1       Active   Solaris2         1     2609    2609    80
    This shows the expanded number of cylinders, but a format command does not.
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number):
    Any ideas?
    Thanks.

    That's the MBR label on the disk. That's easy to modify with fdisk.
    Inside the Solaris partition is another (VTOC) label. That one is harder to modify. It's what you see when you run 'format' -> 'print' -> 'partition' or 'prtvtoc'.
    To resize it, the only method I'm aware of is to record the slices somewhere, then destroy the label or run 'format -e' and create a new label for the autodetected device. Once you have the new label in place, you can recreate the old slices. All the data on the disk should remain intact.
    Then you can make use of the new space on the disk for additional slices, for enlarging the last slice, or for whatever volume manager you have managing the disk.
    Darren
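
    A rough outline of the procedure Darren describes, using the device from the example above (a sketch, not a tested recipe):

    prtvtoc /dev/rdsk/c1t0d0s2 > /var/tmp/c1t0d0.vtoc   # record the current slices
    format -e c1t0d0                                    # write a new label for the autodetected geometry
    fmthard -s /var/tmp/c1t0d0.vtoc /dev/rdsk/c1t0d0s2  # recreate the old slices from the saved table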

  • Smpatch add -b fails for ZFS root

    This is Solaris-10U6, x86, patched to current patchlevels as of this afternoon:
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy
    Name                       Complete Now    On Reboot Delete Status
    be20081218                 yes      no     no        yes    -
    be20081229                 yes      yes    yes       no     -
    # smpatch analyze 2>&1 | tee /var/tmp/,patchlist
    122912-14 SunOS 5.10_x86: Apache 1.3 Patch
    # lucreate -n testbe
    Checking GRUB menu...
    System has findroot enabled GRUB
    Analyzing system configuration.
    Comparing source boot environment <be20081229> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <testbe>.
    Source boot environment is <be20081229>.
    Creating boot environment <testbe>.
    Cloning file systems from boot environment <be20081229> to create boot environment <testbe>.
    Creating snapshot for <rpool/ROOT/be20081229> on <rpool/ROOT/be20081229@testbe>.
    Creating clone for <rpool/ROOT/be20081229@testbe> on <rpool/ROOT/testbe>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/testbe>.
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <be20081218> as <mount-point>//boot/grub/menu.lst.prev.
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <testbe> as <mount-point>//boot/grub/menu.lst.prev.
    File </boot/grub/menu.lst> propagation successful
    Copied GRUB menu from PBE to ABE
    No entry for BE <testbe> in GRUB menu
    Population of boot environment <testbe> successful.
    Creation of boot environment <testbe> successful.
    # smpatch download -x idlist=/var/tmp/,patchlist
    122912-14 has been validated.
    # smpatch add -b testbe -x idlist=/var/tmp/,patchlist
    Ckecking the currently running boot enviornment ...
    Currently running boot enviornment name is [be20081229].
    Checking the destination boot environment [testbe] ...
    Copying the cuurently running BE into inactive BE [testbe] ...
    This grogess will take you a long time, please wait a moment.)
    ERROR: File systems on ABE <testbe> have insufficient space for repopulation from boot environment <be20081229>. It is recommended to delete this BE and create a fresh BE.
    /usr/sbin/lumake: lumake into testbe failed
    #

    I think the problem is generated by the following line in /usr/sbin/lumake:
    $LUBIN/lucomp_size -p $PBE_NAME -i ${ICF} -O $INODE_ICF -n $ABE_NAME
    This command calls /usr/lib/lu/lucomp_size, and it is this which is returning a 1 and ultimately causes the "insufficient space" error.
    To find out what is going wrong when lumake checks the size, please enable the printing of commands and arguments and run the command again:
    # script /var/tmp/smpatch.out
    # set -x
    # smpatch add -b testbe -x idlist=/var/tmp/,patchlist
    # set +x
    # exit
    Once this has completed, please look through /var/tmp/smpatch.out, then run the corresponding lucomp_size command and check the output and exit code:
    # /usr/lib/lu/lucomp_size <args>
    # echo $?
    If you are absolutely certain that there is enough space and just want to make this work now, you could remove the following lines from lumake:
      $LUBIN/lucomp_size -p $PBE_NAME -i ${ICF} -O $INODE_ICF -n $ABE_NAME
      if [ "$?" -ne "0" ] ; then
        # Size is not sufficient
        ${LUPRINTF} -Eelp2 "`gettext 'File systems on ABE <%s> have insufficient \
    space for repopulation from boot environment <%s>. It is recommended to \
    delete this BE and create a fresh BE.'`" "${ABE_NAME}" "${PBE_NAME}"
        err_exit_script 1
      fi
    I would recommend that you open a support call so that the issue can be progressed more quickly.

  • HOWTO:  Changing the URL and reloading the dataset

    Sometimes you will need to change the URL of your dataset and reload the dataset. This is quite easy to do by changing a variable and calling a method. Below is the code you would use. I use it to page through a set of data which only comes back in 10-row increments and whose record count we do not know. (Yippee, Oracle E1). Anyway, here's the code....
    <script type="text/javascript" language="javascript">
    var rowCount = 0;
    var dsPeople = new Spry.Data.XMLDataSet("/spry/?event=peopleSearch&srchRowStart=" + rowCount, "/orders/row", { useCache: false });
    function changeRowCount()
    {
        rowCount = rowCount + 10;
        var spryURL = "/spry/?event=peopleSearch&srchRowStart=" + rowCount;
        dsPeople.url = spryURL;
        dsPeople.loadData();
    }
    </script>
    <div spry:region="dsPeople">
    <div spry:state="loading"><img
    src="/assets/images/ajax-loader.gif"/></div>
    <div spry:state="error">Error Loading
    Data...</div>
    <div spry:state="ready">
    <div spry:repeat="dsPeople">
    <span spry:content="{dsPeople::NAME}"></span>
    </div>
    <a href="javascript:changeRowCount()"><span
    spry:content="Next 10 Records"></span></a>
    </div>
    </div>

    Enabling SSL for Central Administration is a good idea. Making it accessible only by IP address doesn't make it any more secure: that is security through obscurity, and anyone dedicated enough to attacking Central Administration will find the site whether it's reached by IP address or by name.
    For what it's worth, an attacker is going to try scanning IP ranges long before they try looking at DNS. And because of the way SharePoint works, if the site is accessible by its IP address and not a specific hostname, anyone who knows the IP address and the SSL port (443) can connect. If you use a hostname, it won't be immediately accessible.
    Some other thoughts: when you rely just on the IP address, what happens if you want to move Central Administration to another server in the farm, or you want to provide load balancing and have multiple servers hosting Central Administration? What happens when the server running Central Administration dies, so you create a new Central Administration site on a server with a different IP address? How will you communicate this URL change to all of your administrators?
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

    Sorry, I wasn't clear in my OP. The IP would be tied to a DN, so you would go to https://abc123.com and it would ask for authentication. That URL would be tied to a separate IP on the network card of that server. The IP address association is done through IIS.

  • How can I use the "Correct camera distortion" filter and process multiple files in PSE 11?

    How can I use the "Correct camera distortion" filter and process multiple files in PSE 11?

    Did you check the help pages for Correct Camera Distortion and Process Multiple Files?
    Correct Camera Distortion: http://helpx.adobe.com/photoshop-elements/using/retouching-correcting.html#main-pars_heading_5
    Process multiple files: http://help.adobe.com/en_US/photoshopelements/using/WS287f927bd30d4b1f89cffc612e28adab65-7fff.html#WS287f927bd30d4b1f89cffc612e28adab65-7ff6

  • Dynamically generating the ssrs dataset and filling the data into the dataset and binding it to ssrs report dynamically

    I have a task where, in SSRS, we are using server reports in our project. I am looking to dynamically generate the SSRS dataset, fill the data into the dataset, and bind the dataset to the SSRS report (RDL) dynamically.
    Getting the dataset dynamically has a solution using a Report Definition Customization Extension (RDCE), but binding that dataset to the report (RDL) dynamically is the part with no solution.
    Here is the reference for RDCE: http://www.codeproject.com/Articles/355461/Dynamically-Pointing-to-Shared-Data-Sources-on-SQL#6
    I looked for a way to bind the dataset to the report (RDL) dynamically and searched many sites, but I did not find a solution. Can anyone help me here?
    Are there any custom assemblies or custom data processing extensions to work around this? Please help.
    Thanks in advance

    Hi Prabha2233,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic for a further look at this issue. Some delay might be expected from the job transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Vicky Liu
    TechNet Community Support

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.
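
    For what it's worth, the example above renames the boot environment dataset rather than the pool itself, but the caution stands either way: the boot configuration has to follow the new name. An untested sketch of what would need updating:

    zpool set bootfs=rpool/ROOT/`hostname` rpool   # point bootfs at the renamed BE dataset
    # on x86, also update the bootfs lines in /rpool/boot/grub/menu.lst;
    # Live Upgrade's /etc/lutab records the old dataset name as well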

  • ZFS root and Live upgrade

    Is it possible to create /var as its own ZFS dataset when using Live Upgrade? With UFS, there's the -m option to lucreate. It seems like any Live Upgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
    merill

    Hey man,
    I banged my head against the wall with the same question :-)
    One thing that might help you out anyway is that I found a solution for moving UFS filesystems to the new ZFS pool.
    Let's say you have a UFS filesystem with, say, an application server and stuff on /app, which is on c1t0d0s6.
    When you create the new ZFS-based BE, /app is shared.
    In order to move it to the new BE, all you need to do is comment out the lines in /etc/vfstab you want to be moved,
    then run lucreate to create the ZFS BE.
    After that, create a new dataset for /app, just give it a different mountpoint.
    Copy all your stuff,
    rename the original /app,
    and set the dataset's mountpoint.
    That's it: all your stuff is now on ZFS.
    Hope it will be useful,
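
    A condensed sketch of those steps (dataset and BE names are examples, untested):

    # comment out the /app line in /etc/vfstab first, then:
    lucreate -n zfsBE -p rpool                    # create the ZFS boot environment
    zfs create -o mountpoint=/app.new rpool/app   # new dataset on a temporary mountpoint
    (cd /app && tar cf - .) | (cd /app.new && tar xf -)   # copy the data over
    umount /app                                   # release the old UFS mount
    zfs set mountpoint=/app rpool/app             # move the dataset to the real path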

  • Ldmp2v  and ZFS  root source system question

    hi
    reading the ldmp2v doc, it seems to imply that p2v only supports source systems with a UFS root.
    this is fine for S8 and S9 systems.
    what about the new S10 with a ZFS root?
    thx

    Check the links
    Transfer global settings - Multiple source systems
    Re: Difference between Transfer Global Setting & Transfer Exchange rates
    Regards,
    B

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around, but I didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9GB partition on both drives so that the root zpool is created on that and mirrored to the other drive, and then create another ZFS pool from the remaining disk?
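
    For reference, the common pattern with two drives is simply to mirror the root pool onto the second disk; a sketch (device names are assumptions, and a ZFS root pool must live on a slice, hence s0):

    zpool attach rpool c1t0d0s0 c1t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0   # SPARC bootblock on the new mirror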

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.

  • Creating a Month dropdown list for a report with multiple datasets

    I am currently working on a report that contains many charts relying on multiple datasets. Currently I have parameters @StartDate and @EndDate to display the previous month's information across all the charts as my default value. Now I would like to create a dropdown list so that the user can pick any month and have the report display the data across all the charts for that particular month, while keeping the previous month as the default. Can anyone help with how I can approach this? Would I have to use an expression when I specify my available values, or would I have to create a dataset to get the values of all 12 months? Any help would be greatly appreciated.

    This is essentially where I am having some trouble, namely creating the dataset that returns the individual months of the year, as I am relatively new to SSRS. Could you possibly elaborate on how I can do so? Thank you in advance!
    Create a dataset in SSRS.
    Inside the dataset properties, map the data source and then give a command like:
    DECLARE @Month int = 5 --example value
    SELECT ...
    FROM Table
    WHERE datefield >= DATEADD(mm,DATEDIFF(yy,0,GETDATE())*12 +(@Month-2),0)
    AND datefield < DATEADD(mm,DATEDIFF(yy,0,GETDATE())*12 +(@Month-1),0)
    Then on refresh it will pick up the field and parameter information by itself.
    Please Mark This As Answer if it solved your issue
    Please Mark This As Helpful if it helps to solve your issue
    Visakh
    My MSDN Page
    My Personal Blog
    My Facebook Page

  • Oracle Coherence*Web and BlazeDS: Multiple FlexSessions created for the same HttpSession

    Hi all,
    I have searched this forum and found a lot of good information from Alex Glosband and others about the infamous "Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly." message.
    It seems, however, none of the cases are identical to ours. This is ours:
    - Resin 3.1.9
    - Oracle Coherence 3.7.1 with Coherence*Web (session replication)
    With this setup we get the "Detected duplicate HTTP..." message on the first attempt to use BlazeDS and on every subsequent call.  The same client and server code works fine in a local sessions setup.  With Coherence 3.3 (currently our production environment) it seems to occur less frequently, but still as frequent as it is a major issue for us.  It fails even with a single node using in-process distributed caching in our test setup (as well as with multi node out of process caching in our staging environment, for Coherence knowledgeable the resin app server runs with tangosol.coherence.session.localstorage=true in the first case and false in the second).
    Both the listener and message broker are mapped as "Coherence aware" in web.xml[1] so that they should use clustered sessions.
    We have been digging a bit, and we found that if we comment out lines 427 and 434 of flex.messaging.endpoints.BaseHTTPEndpoint from version 4.0.0.14931, it seems to mask the bug. We added some logging in the setupFlexClient method, and it seems that we get more or less a new FlexSession for each and every call, but they have the same cookie and thus the same underlying HttpSession. I.e. the list returned from flexClient.getFlexSessions() keeps growing. Thus we are not so keen on going to production with that memory leak and the above-mentioned ugly hack of commenting out the detection of duplicates.
    We use request scope for the remote object, but could in theory use any scope as we do not really have any state on the object itself, it is all HttpSession state and return values that are key (logon is performed prior to doing the first blaze call, in pure forms and ajax, and it is not a timing issue in that regard we are seeing).
    Hope someone can shed some light on what can be happening. Is there any "reference testing"[2] or something when the FlexSessions are created that makes them being created as new? Where are they created?  We do not know the inner workings of the BlazeDS source, we just watched the call trace of the unwanted invalidation and found that to be line 427 of flex.messaging.endpoints.BaseHTTPEndpoint.
    Can we disable FlexSessions?  Since the flex and plain html parts of the app share the sessions, we always use FlexContext.getHttpRequest().getSession() anyway, never storing any state directly in the FlexSession or on the remote object. Or maybe there is a config option to help us with this detection (or creation) of multiple FlexSessions?
    Cheers and TIA,
    -S-
    [1] - For instance, this is the message broker servlet def:
    <servlet>
      <servlet-name>MessageBrokerServlet</servlet-name>
      <display-name>MessageBrokerServlet</display-name>
      <servlet-class>com.tangosol.coherence.servlet.api22.ServletWrapper</servlet-class>
      <init-param>
        <param-name>coherence-servlet-class</param-name>
        <param-value>flex.messaging.MessageBrokerServlet</param-value>
      </init-param>
      <init-param>
        <param-name>services.configuration.file</param-name>
        <param-value>/WEB-INF/flex/services-config.xml</param-value>
      </init-param>
      <load-on-startup>1</load-on-startup>
    </servlet>
    [2] - As you understand, this is speculation based on thin air, but could it be that in Coherence there is a serialization/deserialization happening somehow that would break such a test?

    Just a quick update: it seems things are running in a stable fashion (and without visible memory leaks, just keeping the latest FlexSession) with these changes in BaseHTTPEndpoint:
    /**
     * Overrides to guard against duplicate HTTP-based sessions for the same FlexClient, which will occur if the remote host has disabled session
     * cookies.
     * @see AbstractEndpoint#setupFlexClient(String)
     */
    @Override
    public FlexClient setupFlexClient(String id) {
        log.debug("setupFlexClient start id " + id);
        FlexClient flexClient = super.setupFlexClient(id);
        // Scan for duplicate HTTP-based sessions. A request attribute is used to deal with batched AMF messages that
        // arrive in a single request and trigger multiple passes through this method.
        boolean duplicateSessionDetected = (FlexContext.getHttpRequest().getAttribute(REQUEST_ATTR_DUPLICATE_SESSION_FLAG) != null);
        if (!duplicateSessionDetected) {
            List<FlexSession> sessions = flexClient.getFlexSessions();
            log.debug("Client has " + sessions.size() + " sessions.");
            int n = sessions.size();
            if (n > 1) {
                int count = 0;
                for (int i = 0; i < n; i++) {
                    if (sessions.get(i) instanceof HttpFlexSession)
                        count++;
                    if (count > 1) {
                        FlexContext.getHttpRequest().setAttribute(REQUEST_ATTR_DUPLICATE_SESSION_FLAG, Boolean.TRUE);
                        duplicateSessionDetected = true;
                        break;
                    }
                }
            }
        }
        // If more than one was found, the remote host isn't using session cookies. The stock code would kill all
        // duplicate sessions and return an error; here we only detach the older HttpFlexSessions from the FlexClient,
        // keeping the latest one, so the list does not grow without bound.
        if (duplicateSessionDetected) {
            List<FlexSession> sessions = flexClient.getFlexSessions();
            log.debug("Detected sessions from client: " + sessions);
            int i = 0;
            for (FlexSession session : sessions) {
                if (session instanceof HttpFlexSession && i < sessions.size() - 1) {
                    flexClient.sessionDestroyed(session);
                }
                i++;
            }
            // The stock code would return an error to the client here:
            // DuplicateSessionException e = new DuplicateSessionException();
            // e.setMessage(ERR_MSG_DUPLICATE_SESSIONS_DETECTED);
            // throw e;
        }
        return flexClient;
    }
    It is not exactly beautiful (to say the least), but if it does the trick I might just be pragmatic enough to go with it... NB: I am of course not proposing this as a patch to this file or anything; it is just an ugly hack for our specific case, but maybe the information can help the BlazeDS team find the root cause making it incompatible with Coherence*Web.
    Will give it a test run on our staging servers.

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem, the system creates a copy rather than taking a ZFS snapshot and clone as the documentation suggests:
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one:
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it:
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see:
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave
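
    If anyone else hits this, a couple of checks worth running before the clone (a sketch; per the docs quoted above, the ZFS clone path is only taken when source and target zonepath are datasets in the same pool):

    zfs list -r rpool/ROOT/s10u6/zones    # is each zonepath its own dataset in the same pool?
    zoneadm list -vc                      # the source zone should be halted ("installed")
    df -n /zones/moetutil /zones          # are the zonepaths really on zfs?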

  • How can I also get the files in the root directory, and how can I (for testing) add items of IEnumerable<FTPListDetail> to a List<string>?

    What I get is only the directories and the files that are in other nodes. But I also have files in the root directory, and I never
    get them. This is a screenshot of my program after I got the content of my FTP. I'm using a treeView to display my FTP content:
    You can see two directories from the root but no files in the root itself. And on my FTP server host I have files in the root directory.
    This is the method I'm using to get the directory listing:
    public IEnumerable<FTPListDetail> GetDirectoryListing(string rootUri)
    {
        var CurrentRemoteDirectory = rootUri;
        var result = new StringBuilder();
        var request = GetWebRequest(WebRequestMethods.Ftp.ListDirectoryDetails, CurrentRemoteDirectory);
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string line = reader.ReadLine();
            while (line != null)
            {
                result.Append(line);
                result.Append("\n");
                line = reader.ReadLine();
            }
        }
        if (string.IsNullOrEmpty(result.ToString()))
            return new List<FTPListDetail>();
        result.Remove(result.ToString().LastIndexOf("\n"), 1);
        var results = result.ToString().Split('\n');
        string regex =
            @"^" +                         //# Start of line
            @"(?<dir>[\-ld])" +            //# Directory flag
            @"(?<permission>[\-rwx]{9})" + //# Permissions
            @"\s+" +                       //# Whitespace
            @"(?<filecode>\d+)" +          //# File code
            @"\s+" +                       //# Whitespace
            @"(?<owner>\w+)" +             //# Owner
            @"\s+" +                       //# Whitespace
            @"(?<group>\w+)" +             //# Group
            @"\s+" +                       //# Whitespace
            @"(?<size>\d+)" +              //# File size
            @"\s+" +                       //# Whitespace
            @"(?<month>\w{3})" +           //# Month (3 letters)
            @"\s+" +                       //# Whitespace
            @"(?<day>\d{1,2})" +           //# Day (1 or 2 digits)
            @"\s+" +                       //# Whitespace
            @"(?<timeyear>[\d:]{4,5})" +   //# Time or year
            @"\s+" +                       //# Whitespace
            @"(?<filename>(.*))" +         //# Filename
            @"$";                          //# End of line
        var myresult = new List<FTPListDetail>();
        foreach (var parsed in results)
        {
            var split = new Regex(regex).Match(parsed);
            var dir = split.Groups["dir"].ToString();
            var permission = split.Groups["permission"].ToString();
            var filecode = split.Groups["filecode"].ToString();
            var owner = split.Groups["owner"].ToString();
            var group = split.Groups["group"].ToString();
            var filename = split.Groups["filename"].ToString();
            // note: the "size" group is parsed by the regex but FTPListDetail has no Size property to store it
            myresult.Add(new FTPListDetail()
            {
                Dir = dir,
                Filecode = filecode,
                Group = group,
                FullPath = CurrentRemoteDirectory + "/" + filename,
                Name = filename,
                Owner = owner,
                Permission = permission,
            });
        }
        return myresult;
    }
    And then this method to loop over and list:
    private int total_dirs;
    private int searched_until_now_dirs;
    private int max_percentage;
    private TreeNode directories_real_time;
    private string SummaryText;
    private TreeNode CreateDirectoryNode(string path, string name, int recursive_levl)
    {
        var directoryNode = new TreeNode(name);
        var directoryListing = GetDirectoryListing(path);
        var directories = directoryListing.Where(d => d.IsDirectory);
        var files = directoryListing.Where(d => !d.IsDirectory);
        total_dirs += directories.Count<FTPListDetail>();
        searched_until_now_dirs++;
        int percentage = 0;
        foreach (var dir in directories)
        {
            directoryNode.Nodes.Add(CreateDirectoryNode(dir.FullPath, dir.Name, recursive_levl + 1));
            if (recursive_levl == 1)
            {
                TreeNode temp_tn = (TreeNode)directoryNode.Clone();
                this.BeginInvoke(new MethodInvoker(delegate
                {
                    UpdateList(temp_tn);
                }));
            }
            percentage = (searched_until_now_dirs * 100) / total_dirs;
            if (percentage > max_percentage)
            {
                SummaryText = String.Format("Searched dirs {0} / Total dirs {1}", searched_until_now_dirs, total_dirs);
                max_percentage = percentage;
                backgroundWorker1.ReportProgress(percentage, SummaryText);
            }
        }
        foreach (var file in files)
        {
            TreeNode file_tree_node = new TreeNode(file.Name);
            file_tree_node.Tag = "file";
            directoryNode.Nodes.Add(file_tree_node);
            numberOfFiles.Add(file.FullPath);
        }
        return directoryNode;
    }
    Then updating the treeView:
    DateTime last_update;
    private void UpdateList(TreeNode tn_rt)
    {
        TimeSpan ts = DateTime.Now - last_update;
        if (ts.TotalMilliseconds > 200)
        {
            last_update = DateTime.Now;
            treeViewMS1.BeginUpdate();
            treeViewMS1.Nodes.Clear();
            treeViewMS1.Nodes.Add(tn_rt);
            ExpandToLevel(treeViewMS1.Nodes, 1);
            treeViewMS1.EndUpdate();
        }
    }
    And inside the backgroundworker's DoWork, this is how I'm using it:
    var root = Convert.ToString(e.Argument);
    var dirNode = CreateDirectoryNode(root, "root", 1);
    e.Result = dirNode;
    And last the FTPListDetail class:
    public class FTPListDetail
    {
        public bool IsDirectory
        {
            get
            {
                return !string.IsNullOrWhiteSpace(Dir) && Dir.ToLower().Equals("d");
            }
        }
        internal string Dir { get; set; }
        public string Permission { get; set; }
        public string Filecode { get; set; }
        public string Owner { get; set; }
        public string Group { get; set; }
        public string Name { get; set; }
        public string FullPath { get; set; }
    }
    Now the main problem is that when I list the files and directories and display them in the treeView, it doesn't get/display
    the files in the root directory, only in the sub nodes.
    I will see the files inside hello and stats, but I also need to see the files in the root directory.
    1. How can I get and list/display the files of the root directory?
    2. For the test, I tried to add the items in var files to a List<string> to see if I get the root files at all.
       This is what I tried: before CreateDirectoryNode I added:
    private List<string> testfiles = new List<string>();
    Then after var files I did:
    testfiles.Add(files.ToList()
    But this is wrong. I just wanted to see in testfiles what items I'm getting in var files at the end of the process.
    Both var files and directoryListing are of type IEnumerable<FTPListDetail>.
    The most important thing is number 1 above; then number 2.

    Risa, no.
    What I mean is this. This is a screenshot of my FTP server at my host (ipage.com).
    Now this is a screenshot of my program, and you can see that in my program I have only the directories hello, stats, and test, but I don't have the files in the root: htaccess.config, swp, txt, 1.txt, 2.png... all these files are missing from my treeView.
    What I want is for my program's treeView to also display the files, like on my FTP server.
    I see in my program only the directories and the files in the directories, but I don't see the files in the root directory/node.
    I need it to be like on my FTP server: I need to see in my program the htaccess, 1.txt, 2.png and so on.
    So what I wrote in my main question is that in var files I do see these files of the root directory; I just don't know how to add and display them in my treeView (my treeView is treeViewMS1).
    I checked in my program: in the method CreateDirectoryNode, in the first iteration of the recursion, var files contains these root files; I just don't know how to add and display them in my treeView.
    On the next iterations, when it recurses, it adds the directories hello, stats, and test and the files in those directories, but I need it to first add the root files.

Maybe you are looking for

  • RE: [iPlanet-JATO] image button handling

    Hi Todd, from what I have seen so far on the Project they are just buttons on the page. In the interim, I modified RequestHandlingViewBase.acceptsRequest() to handle the matching of parameter and command child names. from if (request.getParameter(com

  • Need help on XML Publisher Template features

    Hi, We have a requirement to develop an Invoice Register report to display the invoices for a period. Details of each Invoice and Amount needs to be displayed in a single line. At the bottom of each page we have to display running total of the Invoic

  • Help needed with DWT file

    Please can you help: I made the initial template for my website but now I want to change the template so I can edit meta tags (the tags were originally an uneditable area). When I apply my example below in the DWT file and apply to all pages the meta tags disappea

  • Update from OS X 10.5.8 to get on the cloud  What do I do??

    I am running OS X 10.5.8. What and how do I update so I can get on the cloud??

  • Regarding Messages Display

    Hi.. I am taking the LIFNR as input through Select-Options. In the LPS I'm displaying the vendor's purchase header details. If no purchase data exists for a particular vendor... then I'm displaying one information message. But after clicking on the message popup box.