Serializable vs Repeatable Read

Hi All,
What is the difference and exact usage of the parameters Serializable vs Repeatable Read in the sender JDBC adapter?
Thanks
Sai

Hi Sai,
What is the difference and exact usage of the parameters Serializable vs Repeatable Read in the sender JDBC adapter?
Go through these links:
http://help.sap.com/saphelp_srm40/helpdata/pt/7e/5df96381ec72468a00815dd80f8b63/content.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6a90d6aa-0b01-0010-8a83-cf0e6c70dcce
Sender JDBC Adapter
/people/yining.mao/blog/2006/09/13/tips-and-tutorial-for-sender-jdbc-adapter
http://help.sap.com/saphelp_nw04s/helpdata/en/22/b4d13b633f7748b4d34f3191529946/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/22/b4d13b633f7748b4d34f3191529946/content.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
/people/sap.user72/blog/2005/06/01/file-to-jdbc-adapter-using-sap-xi-30
/people/sameer.shadab/blog/2005/10/24/connecting-to-ms-access-using-receiver-jdbc-adapter-without-dsn
/people/saravanakumar.kuppusamy2/blog/2005/01/19/rdbms-system-integration-using-xi-30-jdbc-senderreceiver-adapter
Thanks,
Satya
Edited by: SATYA KUMAR AKKARABOYANA on May 8, 2008 4:52 PM
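
In short, both options correspond to the standard JDBC isolation constants: Repeatable Read guarantees that rows already read will return the same values if read again within the same transaction, while Serializable additionally prevents "phantom" rows (new rows matching the same WHERE clause) from appearing. Below is a minimal plain-JDBC sketch of that difference; the connection URL, table and column names are placeholders, and the exact blocking behaviour depends on the database and driver:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RepeatableReadVsSerializable {

    // Counts rows still flagged as unprocessed (table and column are placeholders).
    static int countOpenRows(Statement st) throws SQLException {
        try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM mytable WHERE processed = 0")) {
            rs.next();
            return rs.getInt(1);
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection con = DriverManager.getConnection("jdbc:mydb://host/db", "user", "pwd");
        con.setAutoCommit(false);
        // Switch to Connection.TRANSACTION_SERIALIZABLE to see the stricter behaviour.
        con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

        try (Statement st = con.createStatement()) {
            int first = countOpenRows(st);
            // ... another session inserts a row with processed = 0 and commits here ...
            int second = countOpenRows(st);
            // REPEATABLE READ: 'second' may be larger than 'first' (a phantom row appeared).
            // SERIALIZABLE:    both counts are guaranteed to be the same.
            System.out.println(first + " / " + second);
        }
        con.commit();
        con.close();
    }
}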

Similar Messages

  • Isolation level: repeatable read vs read stability.

    I was going through the following link [http://www.developer.com/print.php/3706251] about database isolation levels. There was a statement:
    In Read Stability, only rows that are retrieved or modified are locked, whereas in Repeatable Read, all rows that are being referenced are locked.
    What is meant by "all rows that are being referenced"?
    My understanding is that in the case of repeatable read the whole table is locked. Is this understanding correct?
    Edited by: user476453 on Oct 29, 2010 2:03 AM

    This article is referencing DB2 isolation levels and not Oracle ones: isolation levels are standardized in SQL but practically they can be very different from one database to another. For Oracle please refer to http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/consist.htm#CNCPT621.
    Your DB2 question should be posted on a DB2 forum and not on an Oracle forum.

  • SP2013 Bug Report: Health Reports - You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels.

    There appears to be an error when trying to view Health Reports from Central Administration; a simple change to a SharePoint stored procedure works around the problem. I get the error message "You can only specify the READPAST lock in the READ COMMITTED or
    REPEATABLE READ isolation levels." when simply clicking "View Health Reports" --> Go in CA.
    I have found the same problem in some blog posts which leads me to believe this is a bug:
    Problems Viewing Health Reports in SharePoint 2013
    From the blog post "I managed to work around it by altering the
    proc_GetSlowestPages stored procedure and commenting out the
    WITH (READPAST) line. "
    This also worked for me. It would be great if a fix could be released for this problem as it seems to cause other problems as well (site analytics freezes).

    Hi Dennis
    Hope you have found the hotfix and installed it.
    For the benefit of others who visit this thread, the SharePoint Server 2013 Client Components SDK hotfix package addresses this issue: http://support.microsoft.com/kb/2849962
    Regards
    Sriram.V

  • You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels

    Hi, I have a piece of code that works fine in SSMS as T-SQL. When I put the T-SQL inside an SP, I get the error:
    You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels
    The script starts as follows (select only):
    SET NOCOUNT ON
    Set Transaction Isolation Level Read Committed
    Set Deadlock_Priority Low
    Select......
    From MyTable WITH (NOLOCK)
    GROUP BY .....
    Order BY ....
    It works fine in SSMS as plain T-SQL, as I said, but the SP generates the following:
    Msg 650, Level 16, State 1, Procedure usp_TotalMessages, Line 15
    You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels.
    By the way, when it says line 15, where should we start counting from: the USE DB statement (which includes comments as well as Set ANSI....), or the ALTER PROCEDURE statement?
    Thanks in advance

    Set Transaction Isolation Level Read Committed
    Set Deadlock_Priority Low
    Select......
    From MyTable WITH (NOLOCK)
    GROUP BY .....
    Order BY ....
    First you set the transaction isolation level to "Read Committed", then you use the query hint "NOLOCK", which is equivalent to "Read Uncommitted"; so which do you want, committed or uncommitted? You have to decide.
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Repeatable Reading Variation in Noise & Vibration Analysis

    We are testing the quality of gears based on the noise and vibration produced when the bevel and pinion gears mesh with each other. We are using SV DAQ NI PCI-447 and have connected an accelerometer and a microphone to the 1st and 4th channels respectively. We are following the signal harmonics method for the testing. First, we acquire the noise and vibration signals, process them with the method above, and judge whether the gear has defects or not (based on threshold values). The problem is that each time we run the machine, the noise and vibration signals we acquire are different, and because of this the final results (harmonic values) also change. We need some filters or another solution so that we acquire noise and vibration only for the gear mesh. I uploaded a picture showing how we do the test process for sound.
    Attachments:
    harmonic.png (35 KB)

    If you are only concerned about peaks within a certain frequency range, you should adjust the Search Range in the settings of the Peak Search Express VI (on the Configuration tab).  That will tell the express VI to only return the peaks in the frequency range that you choose.
    To clean up your results, you may want to try some FFT averaging.  This might help if your peaks are jumping around a little bit due to a low peak threshold that might be picking up some noise.
    If you really would like to do some filtering, you could use a highpass filter to get rid of any low-order (1x/2x/3x, etc.) vibration that you might not care about.  From your picture, it looks like you have the Sound & Vibration Measurement Suite and you are comfortable using express VIs.  So, you could use the Filter express VI from the palette here:  Sound and Vibration>>S&V Express Measurements>>Processing>>Filter.

  • JDBC adapter - update statement

    I have come to the conclusion that there is no direct connection between the select and update statement of a sender JDBC adapter, in terms of commit scope.
    According to SAP documentation:
    "The UPDATE statement must alter exactly those data records that have been selected by the SELECT statement. You can ensure this is the case by using an identical WHERE clause. (See Processing Parameters, SQL Statement for Query, and SQL Statement for Update below)."
    But my point is: if the select statement retrieves e.g. 5 rows based on a where condition, the update statement could find 6 rows to update, if a row was inserted a split second after the select but before the update. Result: a row is lost...
    I don't think the select statement puts a lock on the table(s) it accesses and releases that lock only after the update has been committed, which would ensure integrity between the select and update statements.
    Can anybody confirm or deny this?

    Hi,
    Have you seen the Isolation Level for Transaction setting in the sender JDBC adapter?
    Set the isolation level to Serializable or Repeatable Read and the database gets locked; until the update happens, no insertion can occur in that split second!
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    Regards,
    Bhavesh
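    To picture what that means in practice, here is a rough plain-JDBC sketch of a select-then-update done inside a single SERIALIZABLE transaction. This is only an illustration of the locking idea, not the adapter's internal code; the connection details, table and column names are invented, and how strictly concurrent inserts are blocked still depends on the database's implementation of SERIALIZABLE:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PollAndFlag {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details.
            Connection con = DriverManager.getConnection("jdbc:mydb://host/db", "user", "pwd");
            con.setAutoCommit(false);
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (Statement st = con.createStatement()) {
                // 1. Read the unprocessed rows (the adapter's query statement).
                try (ResultSet rs = st.executeQuery("SELECT * FROM orders WHERE processed = 0")) {
                    while (rs.next()) {
                        // build the message payload ...
                    }
                }
                // 2. Flag exactly that set (the adapter's update statement). Because the
                //    transaction is SERIALIZABLE, a row inserted by another session after
                //    step 1 cannot slip into this UPDATE's result set.
                st.executeUpdate("UPDATE orders SET processed = 1 WHERE processed = 0");
                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                con.close();
            }
        }
    }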

  • Questions about sender JDBC adapter

    Normally we use sender jdbc adapter in this way:
    query statement:
    SELECT * FROM table WHERE processed = 0
    update statement:
    UPDATE table SET processed = 1 WHERE processed = 0;
    The update statement will be executed after the message has been successfully sent to XI.
    My question is: what will happen in this scenario?
    A new record (processed = 0) is added to the database table while a message is being sent to XI by the query statement but has not yet finished.
    After the message is sent to XI successfully, XI will execute the update statement. Then the newly added record will also be updated to 1 although it hasn't been sent to XI.
    Does XI have some control or special check for this issue?
    Regards
    Hui

    Hi,
    this can be handled by the Isolation Level for Transaction setting of the sender JDBC adapter.
    Set the transaction handling to SERIALIZABLE or REPEATABLE READ; then, until the data has been read by the JDBC adapter and updated by the UPDATE statement, no new rows will be allowed to be inserted into the database. The database will be write-locked.
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    Isolation Level for Transaction
    There are different levels of database transactions known as isolation levels. The isolation level determines how transactions running in parallel influence each other. The options correspond to the JDBC constants:
        Default (default setting of the respective database)
        None
        read_uncommitted (weakest setting)
        read_committed
        repeatable_read
        serializable (strongest setting)
    Regards,
    Bhavesh
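    Since these option values correspond to the java.sql.Connection constants and not every database supports every level (hence the "Default" entry), a small JDBC check like the following can show the target database's default level and which levels it actually accepts. The connection details are placeholders:
    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class SupportedIsolationLevels {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - point this at the database the adapter polls.
            Connection con = DriverManager.getConnection("jdbc:mydb://host/db", "user", "pwd");
            DatabaseMetaData md = con.getMetaData();
            System.out.println("Default level: " + md.getDefaultTransactionIsolation());
            int[] levels = {
                Connection.TRANSACTION_NONE,             // None
                Connection.TRANSACTION_READ_UNCOMMITTED, // read_uncommitted (weakest)
                Connection.TRANSACTION_READ_COMMITTED,   // read_committed
                Connection.TRANSACTION_REPEATABLE_READ,  // repeatable_read
                Connection.TRANSACTION_SERIALIZABLE      // serializable (strongest)
            };
            for (int level : levels) {
                System.out.println(level + " supported: " + md.supportsTransactionIsolationLevel(level));
            }
            con.close();
        }
    }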

  • How to set the isolation level on Entity EJBs

    I am using 10.1.3.3 of the OC4J app server.
    I am creating an application that uses EJB 2.1.
    I am trying to set the isolation levels on the EJBs to either serializable or repeatable read.
    When I deploy the EAR file from the OC4J admin console, I can set the isolation level property on the EJBs; however, when I inspect the orion-ejb-jar.xml file I do not see the isolation level being set. Furthermore, I tried to manually change the isolation setting by editing orion-ejb-jar.xml and adding the isolation="serializable" attribute on the entity bean descriptor. I then stopped and restarted the server and noticed that my change was no longer in the file.
    Can someone please let me know how to solve this problem and set the isolation level on entity EJBs. Thanks

    I found it in ejb.pdf from BEA.
    The transaction-isolation stanza can contain the elements shown here:
    <transaction-isolation>
      <isolation-level>Serializable</isolation-level>
      <method>
        <description>...</description>
        <ejb-name>...</ejb-name>
        <method-intf>...</method-intf>
        <method-name>...</method-name>
        <method-params>...</method-params>
      </method>
    </transaction-isolation>
              "Hyun Min" <[email protected]> wrote in message
              news:3c4e7a83$[email protected]..
              > Hi!
              >
              > I have a question.
              > How to set the transaction isolation level using CMT in descriptor?
              >
              > The Isolation level not supported in CMT?
              >
              > Thanks.
              > Hyun Min
              >
              >
              

  • Isolation Level

    Hi All
    I have a couple of queries on the isolation level for a write connection:
    <1> What is the default isolation level for a write connection in TopLink?
    <2> If I use beginEarlyTransaction on a UOW instance, will it acquire the database row lock on its own, or should I set the required isolation level through an API? And what is the desired isolation level in this case, SERIALIZABLE or REPEATABLE READ?
    Thanks in advance
    Regards
    Ben
    Message was edited by:
    [email protected]

    'serializable' should work with an 8i/9i database.
    What version are you using?

  • PL/SQL block reading table data from a single point in time

    I am trying to figure out whether several cursors within a PL/SQL block are executed from a single point in time, and thus do not see any updates to tables made by other processes or procedures running at the same time.
    The reason I am asking is that I have a block of code performing some data extraction, with some initial sanity checks before the code executes. However, if some other procedure modifies the data in between, then the sanity check is invalid. So I am basically trying to figure out if there is some read consistency within a PL/SQL block, preventing updates from other processes from being seen.
    Anyone have an idea?
    BR,
    Cenk

    "Transaction-Level Read Consistency
    Oracle also offers the option of enforcing transaction-level read consistency. When a transaction runs in serializable mode, all data accesses reflect the state of the database as of the time the transaction began. *This means that the data seen by all queries within the same transaction is consistent with respect to a single point in time, except that queries made by a serializable transaction do see changes made by the transaction itself*. Transaction-level read consistency produces repeatable reads and does not expose a query to phantoms."
    http://www.oracle.com/pls/db102/search?remark=quick_search&word=read+consistency&tab_id=&format=ranked
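    In other words, if the extraction runs as one serializable transaction, every cursor opened inside the PL/SQL block sees the database as of the moment the transaction began. As a purely illustrative sketch (assuming the block is invoked over JDBC; the connection details and the my_sanity_check_and_extract procedure name are made up), the same effect as SET TRANSACTION ISOLATION LEVEL SERIALIZABLE can be requested like this:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ConsistentExtract {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//host:1521/db", "user", "pwd");
            con.setAutoCommit(false);
            // Serializable mode: every query issued until COMMIT sees the database
            // as of the moment the transaction began.
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (CallableStatement cs = con.prepareCall("BEGIN my_sanity_check_and_extract; END;")) {
                // The cursors inside the PL/SQL block now share a single snapshot.
                cs.execute();
            }
            con.commit();
            con.close();
        }
    }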

  • Single-statement 'write consistency' on read committed?

    Please note that in the following I'm only concerned about single-statement read committed transactions. I do realize that for a multi-statement read committed transaction Oracle does not guarantee transaction set consistency without techniques like select for update or explicit hand-coded locking.
    According to the documentation Oracle guarantees 'statement-level transaction set consistency' for queries in read committed transactions. In many cases, Oracle also provides single-statement write consistency. However, when an update based on a consistent read tries to overwrite changes committed by other transactions after the statement started, it creates a write conflict. Oracle never reports write conflicts on read committed. Instead, it automatically handles them based on the new values for the target table columns referenced by the update.
    Let's consider a simple example. Again, I do realize that the following design might look strange or even sloppy, but the ability to produce a quality design when needed is not an issue here. I'm simply trying to understand the Oracle's behavior on write conflicts in a single-statement read committed transaction.
    A valid business case behind the example is rather common - a financial institution with two-stage funds transfer processing. First, you submit a transfer (put transfer amounts in the 'pending' column of the account) in case the whole financial transaction is in doubt. Second, after you got all the necessary confirmations you clear all the pending transfers making the corresponding account balance changes, resetting pending amount and marking the accounts cleared by setting the cleared date. Neither stage should leave the data in inconsistent state: sum (amount) for all rows should not change and the sum (pending) for all rows should always be 0 on either stage:
    Setup:
    create table accounts (
      acc int primary key,
      amount int,
      pending int,
      cleared date
    );
    Initially the table contains the following:
    ACC AMOUNT PENDING CLEARED
    1 10 -2
    2 0 2
    3 0 0 26-NOV-03
    So, there is a committed database state with a pending funds transfer of 2 dollars from acc 1 to acc 2. Let's submit another transfer of 1 dollar from acc 1 to acc 3 but do not commit it yet in SQL*Plus Session 1:
    update accounts
    set pending = pending - 1, cleared = null where acc = 1;
    update accounts
    set pending = pending + 1, cleared = null where acc = 3;
    ACC AMOUNT PENDING CLEARED
    1 10 -3
    2 0 2
    3 0 1
    And now let's clear all the pending transfers in SQL*Plus Session 2 in a single-statement read-committed transaction:
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    Session 2 naturally blocks. Now commit the transaction in session 1. Session 2 readily unblocks:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 0 1
    Here we go: the results produced by the single-statement read committed transaction in session 2 are inconsistent, since the second funds transfer has not completed in full. Session 2 should have produced the following instead:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 1 0 26-NOV-03
    Please note that we would have gotten the correct results if we ran the transactions in session 1 and session 2 serially. Please also note that no update has been lost. The type of isolation anomaly observed is usually referred to as a 'read skew', which is a variation of 'fuzzy read' a.k.a. 'non-repeatable read'.
    But if in the session 2 instead of:
    -- scenario 1
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    we issued:
    -- scenario 2
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and pending <> 0;
    or even:
    -- scenario 3
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and (pending * 0) = 0;
    We'd have gotten what we really wanted.
    I'm very well aware of the 'select for update' or serializable isolation level solution for the problem. Also, I could present a working example for precisely the above scenario for a major database product, providing the results that I would consider to be correct. That is, the interleaving execution of the transactions has the same effect as if they completed serially. Naturally, no extra hand-coded locking techniques like select for update or explicit locking are involved.
    And now let's try to understand what just has happened. Playing around with similar trivial scenarios one could easily figure out that Oracle clearly employs different strategies when handling update conflicts based on the new values for the target table columns, referenced by the update. I have observed the following cases:
    A. The column values have not changed: Oracle simply resumes using the current version of the row. It's perfectly fine because the database view presented to the statement (and hence the final state of the database after the update) is no different from what would have been presented if there had been no conflict at all.
    B. The row (including the columns being updated) has changed, but the predicate columns haven't (see scenario 1): Oracle resumes using the current version of the row. Formally, this is acceptable too as the ANSI read committed by definition is prone to certain anomalies anyway (including the instance of a 'read skew' we've just observed) and leaving behind somewhat inconsistent data can be tolerated as long as the isolation level permits it. But please note - this is not a 'single-statement write consistent' behavior.
    C. Predicate columns have changed (see scenario 2 or 3): Oracle rolls back and then restarts the statement, making it look as if it did present a consistent view of the database to the update statement indeed. However, what seems confusing is that sometimes Oracle restarts when it isn't necessary, e.g. when new values for predicate columns don't change the predicate itself (scenario 3). In fact, it's a bit more complicated (I also observed restarts on some index column changes, and triggers and constraints change things a bit too), but for the sake of simplicity let's not go there yet.
    And here come the questions, assuming that (B) is not a bug, but the expected behavior:
    1. Does anybody know why it has never been documented in detail when exactly Oracle restarts automatically on write conflicts, given that there are cases when it seemingly should restart but won't? Many developers would hesitate to depend on the feature as long as it's not 'official'. Hence, the lack of information makes it virtually useless for critical database applications, and a careful app developer would be forced to use either the serializable isolation level or hand-coded locking for a single-statement update transaction.
    If, on the other hand, it's been documented, could anybody please point me to the bit in the documentation that:
    a) Clearly states that Oracle might restart an update statement in a read committed transaction because otherwise it would produce inconsistent results.
    b) Unambiguously explains the circumstances when Oracle does restart.
    c) Gives clear and unambiguous guidelines on when Oracle doesn't restart and therefore when to use techniques like select for update or the serializable isolation level in a single-statement read committed transaction.
    2. Does anybody have a clue what was the motivation for this peculiar design choice of restarting for a certain subset of write conflicts only? What was so special about them? Since (B) is acceptable for read committed, then why Oracle bothers with automatic restarts in (C) at all?
    3. If, on the other hand, Oracle envisions the statement-level write consistency as an important advantage over other mainstream DBMSs as it clear from the handling of (C), does anybody have any idea why Oracle wouldn't fix (B) using well-known techniques and always produce consistent results?

    I am intrigued that this posting has attracted so little interest. The behaviour described is not intuitive and seems to be undocumented in Oracle's manuals.
    Does the lack of response indicate:
    (1) Nobody thinks this is important
    (2) Everybody (except me) already knew this
    (3) Nobody understands the posting
    For the record, I think it is interesting. Having spent some time investigating this, I believe the behaviour described is correct, consistent and understandable. But I would be happier if Oracle documented it in the Transaction sections of the manual.
    Cheers, APC

  • Serialization of complex hierarchies using PortableObject or PofSerializer

    Hi, I am starting with Coherence and I am having a hard time trying to introduce POF serialization into a complex hierarchy of classes... classes containing other classes that extend abstract classes that extend other abstract classes... All of them add up to 500 classes... I tried using PortableObject and Cohclipse to generate the readExternal and writeExternal methods automatically, but I noticed that the data of the attributes contained in the abstract classes was missing. I thought it might be because I repeated in the subclass the indexes used for the attributes of the abstract class, but I changed that and it is still not working. Besides, I have problems with Enums...
    I just wanted to know which would be the best approach to this kind of task, because I am trying to look for possible solutions on Google but all I can find are very simple examples... If you could give me any advice I would really appreciate it. Thanks

    In Coherence 3.7 you can use annotations for POF serialization. Coherence introduced two annotations, Portable and PortableProperty, that make it easy to add POF serialization without writing readExternal/writeExternal methods. They can also set the indexes automatically, so you avoid messing with them.
    Thanks a lot,
    Carlos Curotto.
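    For what it's worth, here is a minimal sketch of what the annotation-based approach looks like, assuming Coherence 3.7.1 or later; the Person class is invented and the annotation package names should be checked against your release:
    import com.tangosol.io.pof.annotation.Portable;
    import com.tangosol.io.pof.annotation.PortableProperty;

    // Hypothetical domain class; it still has to be registered as a POF user type
    // in pof-config.xml as usual.
    @Portable
    public class Person {

        // Explicit POF indexes; according to the reply above they can also be
        // assigned automatically.
        @PortableProperty(0)
        private String name;

        @PortableProperty(1)
        private int age;

        // POF deserialization needs a public no-argument constructor.
        public Person() {
        }

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() {
            return name;
        }

        public int getAge() {
            return age;
        }
    }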

  • SimpleValidator only validates versions on updates with repeatable read?!

    I was testing the SimpleValidator and my example seemed to indicate that it ONLY checks that the enlisted (old) version is the same as the locked (current) version for UPDATED objects if the isolation level is repeatable read (or presumably higher)! I would have expected this check to be done no matter what the isolation level was... I thought it was ONLY reads that were not verified in the read committed isolation level compared to repeatable read...
    It would also be nice to know if one can change how versions are calculated from cache objects simply by overriding the calculateVersion method (my tests indicate that this is possible, but I would like to get it confirmed!). After introducing POF (using separate serializers) I was very happy to avoid having my cached business objects implement Coherence classes or interfaces, and I would not like to break this again by using Versionable....
    /Magnus
    Edited by: MagnusE on Jan 17, 2010 4:09 PM

    I also rewrote the original program using only transaction maps (my first version assumed that I could create detectable conflicts using dirty reads/writes outside of a transaction map just as well as with complete and fully committed transaction maps), but this did not change anything either:
    package txtest;
    import com.tangosol.util.TransactionMap;
    import com.tangosol.util.Versionable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.io.Serializable;
    public class Test1_ver2 {
        public static final class Person implements Versionable, Serializable {
            private int version;
            private final String name;
            private final int age;
            public Person(String name, int age, int version) {
                this.age = age;
                this.name = name;
                this.version = version;
            }
            public int getAge() {
                return age;
            }
            public String getName() {
                return name;
            }
            public Comparable getVersionIndicator() {
                return version;
            }
            public void incrementVersion() {
                version++;
            }
            public String toString() {
                return name + ", version = " + version;
            }
        }
        static final String CACHE_NAME = "dist-test";
        public static void main(String[] args) {
            try {
                // "Create" cache
                NamedCache cache = CacheFactory.getCache(CACHE_NAME);
                // Initialize cache
                cache.put(1, new Person("Foo", 23, 1));
                // Create transaction map 1 and select isolation level
                TransactionMap tx1 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx1.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                // If I use TRANSACTION_GET_COMMITTED no exception is thrown, but if TRANSACTION_REPEATABLE_GET is used
                // the validation throws an exception as expected...
                tx1.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                //tx1.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator1 = new com.tangosol.run.jca.SimpleValidator();
                validator1.setNextValidator(tx1.getValidator());
                tx1.setValidator(validator1);
                // Start transaction
                tx1.begin();
                // Read an object from tx1...
                Person p1 = (Person) tx1.get(1);
                TransactionMap tx2 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx2.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                tx2.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator2 = new com.tangosol.run.jca.SimpleValidator();
                validator2.setNextValidator(tx2.getValidator());
                tx2.setValidator(validator2);
                tx2.begin();
                // Read same object using tx2, update it and write it back
                Person p2 = (Person) tx2.get(1);
                tx2.put(1, new Person(p2.getName(), p2.getAge() + 1, ((Integer) p2.getVersionIndicator()) + 1));
                tx2.prepare();
                tx2.commit();
                tx1.put(1, new Person("Fum", p1.getAge(), ((Integer) p1.getVersionIndicator()) + 1));
                // Prepare and commit
                tx1.prepare();
                tx1.commit();
            } catch (Throwable t) {
                t.printStackTrace();
            }
        }
    }
    Edited by: MagnusE on Jan 18, 2010 10:41 AM

  • DHCP with a WIFI repeater

    Hello,
    I flashed my old router (an ASUS RT NI6) with DD-WRT and set it up as a repeater.
    I can connect to the repeater with my desktop without problem, but my laptop somehow cannot. I usually run Gnome with NetworkManager. The error reported by NetworkManager was a timeout to get an address.
    Since I couldn't get more information, I tried to manually connect to the wifi following these instructions: https://wiki.archlinux.org/index.php/Wireless_Setup
    But dhcpcd fails with
    michel@xone:~$ sudo dhcpcd wlp3s0
    dhcpcd[1482]: version 6.1.0 starting
    dhcpcd[1482]: wlp3s0: waiting for carrier
    dhcpcd[1482]: timed out
    dhcpcd[1482]: exited
    So I tried to go with dhclient
    michel@xone:~$ sudo dhclient -4 -d -v wlp3s0
    Internet Systems Consortium DHCP Client 4.2.5-P1
    Copyright 2004-2013 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    Listening on LPF/wlp3s0/8c:70:5a:ff:a4:08
    Sending on LPF/wlp3s0/8c:70:5a:ff:a4:08
    Sending on Socket/fallback
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 3
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 3
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 7
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 11
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 9
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 15
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 13
    No DHCPOFFERS received.
    No working leases in persistent database - sleeping.
    A little more information:
    lspci -k
    03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 96)
    Subsystem: Intel Corporation Device c220
    Kernel driver in use: iwlwifi
    Kernel modules: iwlwifi
    ip addr
    2: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 8c:70:5a:ff:a4:08 brd ff:ff:ff:ff:ff:ff
    Running
    sudo ip link set wlp3s0 up
    Then I get with ip addr
    2: wlp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 8c:70:5a:ff:a4:08 brd ff:ff:ff:ff:ff:ff
    sudo iw dev wlp3s0 scan -- with the relevant wifi I want to connect to
    BSS be:ae:c5:c3:dc:22(on wlp3s0)
    TSF: 702057511 usec (0d, 00:11:42)
    freq: 2462
    beacon interval: 100 TUs
    capability: ESS ShortSlotTime (0x0401)
    signal: -19.00 dBm
    last seen: 2920 ms ago
    Information elements from Probe Response frame:
    SSID: fromage
    Supported rates: 1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0
    DS Parameter set: channel 11
    TIM: DTIM Count 0 DTIM Period 1 Bitmap Control 0x0 Bitmap[0] 0x0
    ERP: <no flags>
    ERP D4.0: <no flags>
    Extended supported rates: 6.0 9.0 12.0 48.0
    HT capabilities:
    Capabilities: 0x187c
    HT20
    SM Power Save disabled
    RX Greenfield
    RX HT20 SGI
    RX HT40 SGI
    No RX STBC
    Max AMSDU length: 7935 bytes
    DSSS/CCK HT40
    Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
    Minimum RX AMPDU time spacing: 8 usec (0x06)
    HT RX MCS rate indexes supported: 0-15
    HT TX MCS rate indexes are undefined
    HT operation:
    * primary channel: 11
    * secondary channel offset: no secondary
    * STA channel width: 20 MHz
    * RIFS: 1
    * HT protection: nonmember
    * non-GF present: 0
    * OBSS non-GF present: 1
    * dual beacon: 0
    * dual CTS protection: 0
    * STBC beacon: 0
    * L-SIG TXOP Prot: 0
    * PCO active: 0
    * PCO phase: 0
    WMM: * Parameter version 1
    * u-APSD
    * BE: CW 15-1023, AIFSN 3
    * BK: CW 15-1023, AIFSN 7
    * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
    * VO: CW 3-7, AIFSN 2, TXOP 1504 usec
    The repeater used to have a WPA2 with a pre shared key, but I deactivated all security, so I just connect to the wifi with
    sudo iw dev wlp3s0 connect fromage
    Trying dhcpcd
    michel@xone:~$ sudo dhcpcd wlp3s0
    dhcpcd[1482]: version 6.1.0 starting
    dhcpcd[1482]: wlp3s0: waiting for carrier
    dhcpcd[1482]: timed out
    dhcpcd[1482]: exited
    Trying with dhclient
    michel@xone:~$ sudo dhclient -4 -d -v -1 wlp3s0
    Internet Systems Consortium DHCP Client 4.2.5-P1
    Copyright 2004-2013 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    Listening on LPF/wlp3s0/8c:70:5a:ff:a4:08
    Sending on LPF/wlp3s0/8c:70:5a:ff:a4:08
    Sending on Socket/fallback
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 6
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 7
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 14
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 21
    DHCPDISCOVER on wlp3s0 to 255.255.255.255 port 67 interval 13
    No DHCPOFFERS received.
    Unable to obtain a lease on first try. Exiting.
    I also made sure that I deactivated wicd, NetworkManaget, netctl-auto, dhcpcd before doing this.
    I also tried to set a static ip since dhcp is failing
    ip addr add 192.168.0.200/24 dev wlp3s0
    ip route add default via 192.168.69.1
    But that still doesn't work.
    Am I missing something? Am I supposed to set a special mode on my card for a repeater?
    I tried to look at the conf file on my desktop, but found nothing special.
    I can connect with a wire to the repeater, it works fine too.
    If anyone has an idea for something else to try, I would love to hear it : )
    Thanks!

    If your DHCP server is halfway decent and you aren't running more than one DHCP server on a single network, it should work.  Some routers seem to be a little fussy when it comes to Linux clients.  Of course, Arch has a good DHCP server if your router isn't forgiving.  But this problem would not exist on the repeater.

  • Mac Mini won't boot, repeating startup chime

    This afternoon I noticed that my Mac Mini (1.5 GHz Intel Core Solo) was acting sluggish. I opened up Activity Monitor and saw that process 0 (kernel_task?) was using about 50% of the CPU. Assuming that it was in the middle of something, I left it alone for another 10 minutes. Still 50% red in the CPU Usage graph in Activity Monitor. I then turned the computer off. When I turn it back on, I get a gray screen, one normal startup chime, followed by somewhat quieter startup chimes about 1-2 seconds apart. I've tried resetting the PMU, starting up with no USB connections, safe mode, booting from CD, etc. I don't think the keyboard is even recognized; I know it works, because I tested it on my laptop.
    Any suggestions?
    Thanks!

    OK, well it would not make any difference that the install disk was already inserted - as long as the Option key was being held down at the right moment, the system would go into the boot loader screen even if there were no bootable volumes.
    Make sure the Option key is down immediately you hear the first chime. If you've been doing that, it would be an indication of a more serious fault.
    Try an SMC reset to make sure the system hasn't just corrupted the management controller, then try again.
    An SMC reset can be accomplished as follows:
    -From the Apple menu, choose Shut Down (or if the computer is not responding, hold the power button until it turns off).
    -Unplug all cables from the computer, including the power cord and any display cables.
    -Wait at least 15 seconds.
    -Plug the power cord back in, making sure the power button is not being pressed at the time.
    -Then reconnect your keyboard and mouse to the computer.
    -Press the power button on the back to start up your computer.
    If that doesn't work, there's a hardware fault, in which case the nature and pattern of sounds you hear after the initial chime may be the clue, since these may be the system giving an audible report from the power-on self test. Are these regularly repeating beeps rather than chimes? If so, you should hear a pattern of 1, 2, 3 or 4 of them followed by a second pause, then repeating.
