ACI to read/write a single BRANCH ...

Hello gurus !
My situation is a tree like this:
o=entry
+ou=one
+ou=two
+ou=three
+ou=four
I would like to create a user able to modify/read/etc. ONLY the contents of ou=four...
I've placed a uid=fouradmin,ou=four,o=entry entry and created an ACI to let it do everything under ou=four.
Now, if I create a new ACI in ou=four to EXCLUDE access for ANYONE, that ACI becomes the stronger one and I'm no longer able to do anything even as uid=fouradmin...
How can I get around this and allow ONLY uid=fouradmin complete access to ou=four and its subtree?
Thanks,
silvio
I forgot to say that o=entry (and its subtree) has the standard read-only access for all users...
Thanks,
silvio

Access is denied unless there is an ACI that specifically grants access...
You should consider not using DENY ACIs at all.
If what you want is for only fouradmin to have access to ou=four, do it like this:
- An ACI on ou=four grants all rights to fouradmin.
- An ACI on o=entry (the top) uses (target != "ldap:///ou=four,o=entry") to grant access to anyone.
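For reference, a minimal sketch of those two ACIs in Directory Server ACI syntax (the acl names are placeholders, and the exact targetattr list and wildcarding may need adjusting for your server version):

# On ou=four,o=entry: full rights for fouradmin over the branch
aci: (targetattr = "*")(version 3.0; acl "fouradmin full access";
  allow (all) userdn = "ldap:///uid=fouradmin,ou=four,o=entry";)

# On o=entry: access for anyone everywhere EXCEPT the ou=four branch
aci: (target != "ldap:///ou=four,o=entry")(targetattr = "*")
  (version 3.0; acl "read all but ou=four";
  allow (read,search,compare) userdn = "ldap:///anyone";)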
Ludovic.

Similar Messages

  • Read/Write using single adapter

    How can I move files from one location to another using a single file adapter?

    An adapter can have only one operation, one of the following:
    Read, Write, Synchronous Read, or Listing.
    To move the files you can refer to the URL below and navigate to the topic
    http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_file.htm#CIACJFHF
    4.5.11 Copying, Moving, and Deleting Files
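    In outline, that chapter has you create a dummy synchronous read/write operation and then edit the generated .jca file so the interaction spec performs a move. A hedged sketch (property names and directories are from memory of the 11g chapter, so verify them against the doc):
    <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
      <property name="SourcePhysicalDirectory" value="/tmp/in"/>
      <property name="SourceFileName" value="order.txt"/>
      <property name="TargetPhysicalDirectory" value="/tmp/out"/>
      <property name="TargetFileName" value="order.txt"/>
      <property name="Type" value="MOVE"/>
    </interaction-spec>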
    - It is considered good etiquette to reward answerers with points (as "helpful" - 5 pts - or "correct" - 10pts).
    Thanks,
    Vijay

  • Execution/exit of DIO single read/write.vi

    Hi all,
    System - Windows NT 4.0, LabVIEW 7.0, PCI-DIO-96 card.
    Info -
    I am using the "DIO single read/write.vi" to update ports on a custom board. The LabVIEW code has an outer For Loop that executes 64 times (to update the ports 64 times). Inside the loop is a sequence structure. The first sequence frame uses PPI B Port A of the DIO-96 in output Mode 0 (no handshake) to update one target port. The next frame uses PPI A Port A in output Mode 1 (handshake) to update a different target port. The DIO PPIs are always used in the same Mode and the same data direction.
    Question 1 -
    When the vi executes, does it look at the iteration input and leave the PPI config register alone if the iteration input is nonzero? Specifically, during my system init routine, the input is wired to the iteration terminal of the For Loop. Later in the code (after exiting init) the vi is used to do updates in other structures. I want all subsequent executions of the vi to use the same configuration as initialized. If I tie a nonzero numeric constant to the iteration input of the vi during subsequent executions, will the vi leave the 8255 in the same configuration as when the init routine was exited?
    Question 2 -
    I need to wait for the handshake in the second sequence frame to complete before the sequence is exited. I assume that when the vi is used inside a sequence, it is analogous to a subroutine call. Does the vi wait for the handshake to complete before returning? In other words, will the vi complete the handshake with my target port before the sequence frame is exited? If not, any suggestions on how to wait for the completion?
    TIA - Charlie

    Charlie,
    Yes, if the iteration input to DIO Single Read/Write.vi is greater than zero, configuration will not take place. Furthermore, the VI will only return once execution has completed.
    Good luck with your application.
    Spencer S.

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB drives in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy and processing speed in LoB applications compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly: you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). There are, however, not that extensive writes to the backup file created on an external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VMs and hosts, copying large files to and removing them from VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment it's okay, but if this is a production environment it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
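    If you do move to fixed disks, a minimal sketch using the Hyper-V PowerShell module that ships with Server 2012 (paths are placeholders; the VM must be shut down and you need free space for the full-size file):
    # Convert a dynamically expanding VHDX to a fixed-size one
    Convert-VHD -Path "D:\VMs\fileserver.vhdx" -DestinationPath "D:\VMs\fileserver-fixed.vhdx" -VHDType Fixed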
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; although Windows Server 2012 has a built-in network teaming feature, I haven't found an article declaring that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
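    As a sketch, the Server 2012 built-in cmdlets for this are as follows (test outside production first):
    # Install the MPIO feature and let the Microsoft DSM claim iSCSI devices
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI   # MPIO module; applies to newly discovered devices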
    > I have seen that using MPIO suggests using different subnets; is this a requirement for using MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability
    and performance. Of course you can set these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wiring for iSCSI and management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • How to: Create a Program with the rs232 Device -Magcard Reader Writer

    Hi guys!
    I'm new to VB.NET 2010 Express, and this is my first time doing a project that has to incorporate a device.
    I have a magnetic card reader/writer device, and I want to create a connection and UI that interact with the device alone, without using the default application or a Process.Start command.
    The main problem I want to solve is to perform the connection and commands on a single form, allowing the user to read and write the data there.
    What I want to do is create a main form that executes the commands needed to activate the event, allowing the user to use the device without its bundled software, with a text box and a button; the read should execute automatically when a card is swiped and fill in the focused text box.
    I have a document that discusses the commands for the device, and I think it is needed to successfully connect the whole process from the device to the system.
    Can you help me with this project? Thanks. :)

    Hi,
    Welcome to MSDN.
    I am afraid that this is not the proper forum for this issue, since each magnetic card reader/writer has its own API for developers.
    You could consider getting support from the publisher of that magnetic card reader/writer, which should have samples for using its API.
    In addition, from some research, you could refer to
    Build a .NET Class for Serial Device Communications with P/Invoke to learn how to communicate with that serial device.
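    If the card reader presents itself as a plain RS-232 serial port, a minimal VB.NET sketch using the built-in System.IO.Ports.SerialPort class may be all you need (the COM port, baud rate, and the txtCardData text box are assumptions; the real command bytes come from your device document):
    Imports System.IO.Ports

    Public Class CardReaderForm
        ' Port settings are placeholders; take the real ones from the device manual.
        Private WithEvents port As New SerialPort("COM1", 9600, Parity.None, 8, StopBits.One)

        Private Sub CardReaderForm_Load(sender As Object, e As EventArgs) Handles MyBase.Load
            port.Open()
        End Sub

        ' Fires when the reader sends data, e.g. after a card swipe.
        Private Sub port_DataReceived(sender As Object, e As SerialDataReceivedEventArgs) Handles port.DataReceived
            Dim data As String = port.ReadExisting()
            ' DataReceived runs on a worker thread; marshal the update to the UI thread.
            Me.BeginInvoke(Sub() txtCardData.Text &= data)
        End Sub
    End Class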
    Thanks for your understanding.
    Regards.

  • Lion server file sharing issue with windows API read/write ini file (GetPrivateProfileString)

    Hello,
    I am trying to configure Lion Server as a file server for a Windows application we use at work. All the other computers are Windows 7 or XP; the Lion server is the only Mac. I chose Lion Server because of its size and quality, and a personal love of Apple products.
    10.7.2 Lion Server's Samba file sharing works almost perfectly with all my Windows machines: I can copy, delete and modify any text or Office files without any issue, but the most important Windows application for my business doesn't work with Samba file sharing. After some digging, I found it is because Windows programs can't read or write an INI file stored on a Lion share. The Windows API GetPrivateProfileString always returns empty if the INI file is stored on a Lion share.
    You can download a small application for reading/writing Windows INI files from codeproject.com to test this problem:
    http://www.codeproject.com/KB/files/ini.aspx
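    For a self-contained repro without that tool, a minimal C sketch of the failing call (the UNC path, section and key are placeholders):
    #include <windows.h>
    #include <stdio.h>

    /* Read [section] key= from an INI file on the share and print the result. */
    int main(void)
    {
        char value[256];
        DWORD n = GetPrivateProfileStringA(
            "section", "key", "(default)",      /* section, key, fallback value */
            value, sizeof(value),
            "\\\\lionserver\\share\\test.ini"); /* UNC path is a placeholder */
        printf("read %lu chars: %s\n", (unsigned long)n, value);
        return 0;
    }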
    I can open/edit the INI file using any text editor without any problem. The only problem is with those Windows APIs. ACL is turned on for my Lion share, with "delete" rights assigned to Samba users.
    I installed samba3 on the same server; it works perfectly with the Windows API, and my Windows program also works. It looks like there is something wrong with Lion Server's built-in Samba.
    I'd prefer to use the built-in Samba even though I have samba3 working. The built-in Samba is very immature right now, but considering how young it is, I will give Apple some time to make it mature.
    Does anyone have same issue or knows how to fix it?
    Thanks,
    Michael.

    All the memory is fine. The server rarely if ever goes down when there are only around 10-12 users connected. When there are 20+ users connected and working heavily it goes down often. When I say working heavily, I mean they are transferring huge files to the SAN (100GB+), sometimes 5 at a time per user, and there are a bunch of others who are reading large video files at a minimum of 220MB/sec from the SAN.
    Though this worked on Snow Leopard without any issues, Lion just doesn't seem to be able to handle it. The odd thing is, on Snow Leopard there was only a single 1GB ethernet connection to a NAS system, whereas with Lion we have a much more powerful machine with a 6-port 10GB ethernet card and a 4 lane 8GB fiber card to a true SAN. You would think that the newer scenario with Lion would handle far more users with ease.
    So far, very disappointing with regards to Lion's file serving performance.

  • External Hard drive suddenly will only read! How can I make it read/write again?

    Hi folks,
    I'm working with a Seagate FreeAgent GoFlex 320 GB external hard drive on the iMacs here at my university. I have it formatted to ExFAT so that I could use it with my PC as well, but I've never had to so far, so it's only ever been used on the uni iMacs.
    It's been fine for working on media projects, until I came back to uni after the Christmas break, went to continue working on a film project, and realised the external hard drive now said read only??
    One thing I did was to put all of my work from last semester (which is everything that was on the hard drive) into a single folder; could that have caused the problem?
    And does anyone know how I can reset my permissions to read/write without causing damage or risk to the files on the hard drive? I can't clear the disk (I have nowhere else to store the 185 GB of video footage).
    I'd really appreciate it if anyone could tell me how I can fix it. I know how to do all of this on my PC, but I've no idea with the Mac, as I only use it for my media projects here at the university.
    Thanks!
    DrF ;-)

    Out of curiosity, will this method also work for an external drive formatted to NTFS? I have a lot of data that I have nowhere to float while re-formatting the drive, so if this method would allow me to keep the data and still fix the problem, that would be great!! At the moment the drive is formatted to NTFS and permissions say Read/Write, but I can only read. Very frustrating, as my Mac drive is running low on space and performance is being affected. Any help appreciated!!

  • Why doesn't Photoshop support read/write of .mpo files?

    I am actually blown away that I cannot find a single Photoshop plugin that reads and/or saves .mpo files. Does somebody know why? And why isn't anyone talking about this format? I find it hard to believe that no one in the entire Photoshop Windows forum has ever wondered about Photoshop's complete lack of support for, or interest in, .mpo files.
    If nobody knows how to read/write .mpo files directly through Photoshop, perhaps someone could point me to some documentation for writing a plugin for it. It's a really, really simple file format; there's no reason a plugin shouldn't be available already.
    Thanks in advance for your support.
    Jase

    Well, wave of the future or not, it's being pushed hard. That is in a way good for us consumers: choice is always good.
    3D TVs and monitors require at least 120 Hz in order to function. What does that mean for people who spend all their time in front of a monitor?
    It brings about the availability of 120 Hz and beyond flat panels.
    Many people are prone to getting migraines due to refresh rates of 60 Hz or less. Most of them get migraines from sitting at work in front of a 60 Hz monitor all day long and do not even realize it.
    I happen to be one of those people who get migraines, especially if I am doing artwork for 8-10 hours.
    I just picked up a 120 Hz monitor about 2 weeks ago because I am a techno junky and love my toys; it is the Asus one that comes with the NVIDIA glasses.
    There is nothing wrong with my other monitor, a 24-inch HP: 1920 x 1200, 5 ms response time, 60 Hz.
    I didn't think I would really even notice the difference between a 120 Hz and a 60 Hz monitor.
    Boy, oh boy, was I ever mistaken about not noticing. Sitting side by side, to me the picture difference is amazing. Now I have not had an opportunity to do a real weekend artwork fest, but that will come with time.
    The 3D features with the glasses on, I think, are very cool; in games it's extremely noticeable, since you have more control over what you are looking at. I am not sure I would play a game end to end for hours and hours in 3D; I think that would likely make my head explode, but we'll see about that too, and if not, I may be running into walls when I am done, since you tend to lose perception. Not to mention 3D glasses make each eye see the picture at 60 Hz, putting you back to a 60 Hz refresh and migraines.
    Anyway, way off subject, but to sum it up:
    Some good things can come from flavor-of-the-day fads. Granted, some bad sometimes does too... I mean, look at bell bottoms...

  • Oracle coherence first read/write operation take more time

    I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions, when writing to any local, distributed or near cache, the first read/write operation takes more time than the subsequent consecutive read/write operations. Is this because of setup work happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.
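    For reference, a minimal Java sketch of the measurement being described, using the standard CacheFactory/NamedCache API (cache name and keys are placeholders); the first operation typically pays one-time costs such as service startup, connection setup and class loading:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class FirstOpTiming {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test"); // placeholder cache name

            long t0 = System.nanoTime();
            cache.put("k1", "v1");             // first op: pays one-time setup costs
            long first = System.nanoTime() - t0;

            long t1 = System.nanoTime();
            cache.put("k2", "v2");             // steady-state op
            long next = System.nanoTime() - t1;

            System.out.printf("first=%d us, next=%d us%n", first / 1000, next / 1000);
            CacheFactory.shutdown();
        }
    }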

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirement of "1 microsecond for 100000 data collection" on a continuous basis when fully configured in a cluster.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)

  • NFC tags read/write operations on low level

    Hi,
    I know this is little bit offtopic question - but since you are experts in the area I will try to ask you probably a pretty simple question:
    1/ I would like to know which protocol is used for read/write operations to NFC tags. According to my understanding, after the tag is placed on the NFC reader (NFC phone, USB reader), it is powered and set to the ready state. Then the application protocol for the read/write operation is used. As I think the exact format and content of the commands used for read/write is not specified in ISO 14443 and is dependent on the tag hardware/manufacturer, it will be different for FeliCa/Mifare/Innovision/etc. tags, so there is no way to handle NFC tag read/write operations with a single implementation. Is that assumption correct?
    2/ Are there any tags which support the ISO 7816-4 APDU commands for read/write operations?
    Thank you for reply
    Kind regards,
    STeN

    hello,
    You have to read the NFC Forum specs; all of this will be explained better there than by me.
    More than one protocol is used, according to the contactless front-end configuration and abilities. They include ISO 14443-A, ISO 14443-B and FeliCa. Sometimes other protocols are also available, for example Innovatron (not Innovision, lol).
    Mifare is not a protocol; it is a line of NXP products. These products use the lower layers of the ISO 14443-A protocol specification.
    There are 4 types of tags
    1) using the lower layers of ISO14443-A
    2) using the lower layers of ISO14443-B
    3) something related to FeliCa?
    I'm not sure exactly about these three; you have to read the specs. Everything is clearly understandable, not like ETSI.
    4) something using ISO 7816-4 commands on top of ISO 14443 A or B or others. You have SELECT, READ BINARY, UPDATE BINARY. You can implement that using Java Card; I did it and it works. You need two binary files, which can be hardcoded.
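    For type 4, those three commands map to APDUs like the following (file ID, offsets and lengths here are placeholders; the exact sequence is in the NFC Forum Type 4 Tag spec):
    00 A4 00 0C 02 E1 04       SELECT file E104 by ID (CLA=00, INS=A4)
    00 B0 00 00 0F             READ BINARY, offset 0, read 15 bytes (INS=B0)
    00 D6 00 00 03 AA BB CC    UPDATE BINARY, offset 0, write 3 bytes (INS=D6)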
    Regards
    Sebastien

  • Access 2013 Read/Write Connection on 365

    Have web tables all set up on a 365 account. Have set the connection management to Read/Write. But I can't figure out how to link to them from a desktop Access app.
    From within the desktop app, if I use the External Data tab to link to them, attempting the SharePoint Lists option, the tables don't appear. Am pretty sure that's because the web tables on 365 are not SharePoint lists.
    Have searched on this for a while and am not finding instructions anywhere... One can get a Reporting desktop app set up with a single button, but it is read only... sure wish it was this easy for read/write...
    Appreciate any help......

    It is still possible to use a 2010 web application in 2013 that uses SharePoint tables, so some confusion here is legitimate.
    In your case, as you note, you are using 2013, and thus it makes no sense to look for tables in SharePoint when clearly that is not where they reside.
    To find the connection string in the web application, just go to File.
    You will see your application name, and below that:
    Data Connectivity
    In that you see the server name, and the database name. You can use your 365 Account as the logon + password.
    However, for a read/write? In the above area, click on "manage" beside the connections.
    In the list of options for connections such as "From any location", and the "Enable Read-write" connections, there is an option called:
    View Read-Write Connection Information
    Click on the above option. This will result in a dialog box that gives you database name, and the user + password required to link to that database in read/write mode.
    So you simply create a regular plain-Jane desktop database. You then choose External Data and then ODBC. This will let you create a link to SQL Server; this works much the same as it always has in Access (for more than 16 years). You then type in the correct server + user information from the above web application.
    I do think you need to choose the new ODBC 11 (native SQL Server) driver for this to work. So don't choose "SQL Server" from the list of drivers, but the new Native Client 11 driver when setting up the ODBC connection from Access.
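    As a sketch, the same link can also be created in VBA once you have the read/write connection information (server, database, credentials and table names below are placeholders):
    ' Link a table from the web app's SQL database into the desktop database.
    DoCmd.TransferDatabase acLink, "ODBC Database", _
        "ODBC;DRIVER=SQL Server Native Client 11.0;" & _
        "SERVER=xxxxxxxx.database.windows.net;DATABASE=db_xxxx;" & _
        "UID=db_user;PWD=secret;Encrypt=Yes", _
        acTable, "dbo.Customers", "Customers"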
    Best regards,
    Albert D. Kallal (Access MVP)
    Edmonton, Alberta Canada

  • Maxl script unable to read/write to a folder

    When logged in to EAS, we are trying to run a MaxL script that reads/writes a report script in a particular folder on the server. Even though the EAS login username has write permissions to that folder, MaxL keeps saying: unable to open file.
    Is there any internal Hyperion service account that should specifically have write permissions to the folder?
    here is the maxl:
    SPOOL ON TO 'D:\Oracle\WE.log';
    LOGIN 'username' 'password' on 'server1';
    ALTER SYSTEM LOAD APPLICATION 'Plan1';
    ALTER APPLICATION 'Plan1' LOAD DATABASE 'Main';
    export database Plan1.Main using report_file 'D:\Oracle\WE.rep' to data_file 'D:\Oracle\WE.rpt';
    SPOOL ON TO 'D:\Oracle\REV.log';
    When we run it, we get the following error:
    It gets past the alter system and alter application commands, and then:
    Unable to open file ['D:\Oracle\WE.rpt']

    Have you tried running it from a MaxL session and not EAS? If you are testing using MaxL, you may need double backslashes instead of single ones.
    Have you tried with the report script in the database app folder and using the server option - export database Plan1.Main using server report_file 'WE' to data_file 'D:\Oracle\WE.rpt';
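    For example, the export line with escaped backslashes would look like this (same paths as in your script; whether the doubling is needed depends on how the script is invoked):
    export database Plan1.Main using report_file 'D:\\Oracle\\WE.rep' to data_file 'D:\\Oracle\\WE.rpt';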
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • LK6.5 Modbus read/write to many addresses at once

    LK6.5 Modbus read/write to many addresses at once. An example of what happens: I have addresses 100 to 150 and 180 to 200. The addresses in between don't exist. Not all are connected. The Modbus (Ethernet master) driver reads addresses 100-190 in a single command (in order to optimize).
    That is a nice feature, but the result is a reply like "not existing" because 150-180 do not exist. No error is shown in Lookout (communication works OK).
    I know I can do a workaround by setting the maximum values per message to 1 or 2, but the project has 140 Modbus objects with a total of around 4000 connections. I would like to use a big range in one command, but without addresses that do not exist. Besides that, it looks like this problem can crash the driver and/or Lookout.
    Does anyone know a fast solution? Is there an updated Modbus driver?

    more inputs:
    The response from a Modbus device actually doesn't specify which address has the problem if you read a range.
    When you read a range, and a certain address in this range doesn't exist, do you get no data in Lookout?
    It depends on the Modbus device whether Lookout gets an error response or a good response; I don't see a detailed definition in the Modbus specification. So, if a certain address in a range doesn't exist, the device may return either an error or a good response. But the problem is that the error response is just an error code: Lookout doesn't know which address has the problem. It may be a problem that you don't see an alarm in Lookout.
    I don't see a way on the Lookout side to get better behaviour. Maybe you can consult the device provider for advice, such as what the Modbus master should do, and whether the range read is expected.
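    For reference, the exception frame in that case carries only a function code and an exception code, e.g. for Read Holding Registers (0x03) over the 100-190 range (unit ID and transport framing omitted):
    request : 03 00 64 00 5B    read 91 registers starting at address 100
    response: 83 02             0x83 = 0x03 | 0x80, exception 0x02 = ILLEGAL DATA ADDRESS
    So there is nothing in the reply that identifies which address failed.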
    Ryan Shi
    National Instruments

  • Single-branch git clone in PKGBUILD

    I have a GitHub repo with several branches and I want to create a package based on the master branch. In a PKGBUILD I used "git clone https://github.com/some_account/some_repo --single-branch master" to clone only the master branch and not all the other things. I would like to modify my PKGBUILD to use " source='git://github.com/some_account/some_repo.git' " and not use the above "git clone" command directly. How can I tell makepkg to clone only the master branch, and not the whole repo, in the latter form?

    Allan wrote: That still downloads all branches...
    The only way is to manually download the source in the prepare function.
    Yes you are right. It still downloads the whole repo.
    Thanks
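    For reference, a minimal sketch of the prepare()-based workaround Allan describes (package and repo names are placeholders):
    # PKGBUILD fragment: fetch the source ourselves instead of via source=()
    source=()  # nothing for makepkg to download

    prepare() {
      # clone only the master branch, shallow, into srcdir
      git clone --branch master --single-branch --depth 1 \
          https://github.com/some_account/some_repo.git "$srcdir/some_repo"
    }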

  • Extremely bad read/write latency on iSCSI datastore

    Hello,
    I have a single host in my test lab which is having very bad read/write latency to an iSCSI datastore.  All of my hosts have 1G ethernet, other hosts in the lab are not having this issue.  What can I check to help isolate this issue?  Are there any steps I can take to optimize the performance?

    I'm struggling with exactly the same problem, but on ESXi 4.1.
    It seems that ZFS inflates I/O. When you check disk activity you can see that the underlying ZFS thrashes the disks, while it results in only modest activity within NTFS.
    I just cannot figure out how to cope with it.
