Repository manipulations

Hi,
I am interested in manipulating the HTML DB repository directly in the database using PL/SQL or Java. Is that possible?
I would like to integrate an HTML DB application with ADF Faces, and sometimes I will have to manipulate the HTML DB repository from ADF Java code. Can it be done?
regards,
Cezary

OK about "data", but I am talking about, for instance, the definition of a master-detail form.
Can it be changed? Not from the HTML DB wizards, but by something like an API?

As John said, the HTML DB applications you create (page definitions, report definitions, etc.) are all stored in the database as regular data in regular tables in the FLOWS_XXXXXX schema. So yes, it can be changed, as long as the application trying to do it is given the appropriate schema permissions.

Is there a Java API that can change a master-detail form?

That's up to you. You can use Java or any other tool that can do DML against Oracle.
Earl
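
A hedged illustration of the kind of DML Earl describes: a minimal SQL sketch against the repository tables. The schema name FLOWS_020000 and the table/column names (WWV_FLOWS, WWV_FLOW_STEPS, FLOW_ID) are assumptions for illustration only; the actual names vary by HTML DB release, and direct changes to these tables are unsupported, so verify everything against your own FLOWS_XXXXXX schema first.
-- Minimal sketch only: list the pages of one application stored in the
-- HTML DB repository. Schema, table, and column names are assumptions.
SELECT f.id   AS app_id,
       f.name AS app_name,
       s.id   AS page_id,
       s.name AS page_name
FROM   flows_020000.wwv_flows      f
JOIN   flows_020000.wwv_flow_steps s ON s.flow_id = f.id
WHERE  f.name = 'MY_APP';   -- hypothetical application name
The same tables could, in principle, be read or updated from Java over plain JDBC, which is what "any other tool that can do DML" amounts to in practice.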

Similar Messages

  • About Multiple Repository manipulation !!!

    Hi! Everyone.
    I want to use multiple repositories on different disks.
    We are using 2 repositories.
    I want to create a new repository on a different disk.
    Will Forte know about the new repository?
    Is it possible?
    We are using Forte2.0H5 & Forte2.0E2 on a VAX machine with VMS 6.0.
    If anyone knows a solution to the above, any help would be appreciated.
    Regards,

  • Manipulating objects in WebDAV repository via BSP

    Hi community,
    I need to publish documents uploaded from a BSP page to a Unix machine running Slide as a WebDAV repository.
    I found an ABAP interface (IF_HTTP_WEBDAV_RESOURCE) that seems to do the job, but no example is supplied.
    Has anyone already dealt with this problem?
    Thanks in advance,
    Stefano

    Hi
    There are several references to the interface IF_HTTP_WEBDAV_RESOURCE.
    Look for the following classes where this interface is used:
    CL_HWR_SKWF_LOIO
    CL_HWR_SKWF_RESOURCE
    CL_O2_HTTP_WEBDAV
    CL_RCM_WEBDAV_RESOURCE
    CL_RCM_WEBDAV_RESOURCE_CONT
    CL_RCM_WEBDAV_RESOURCE_GSP
    CL_RR_RESOURCE   
    CL_SRM_WEBDAV_RESOURCE_BDV
    CL_HTTP_EXT_BSP_MIME
    CL_HTTP_EXT_WEBDAV_PUBLIC 
    CL_HWR_SKWF_APP
    CL_HTTP_EXT_WEBDAV_SKWF
    CL_HTTP_WEBDAV  
    CL_HTTP_WEBDAV_RSOD_TMPL
    CL_HTTP_WEBDAV_SKWF
    CL_HTTP_WEBDAV_SKWF_FS
    Using transaction SE24, you can look at the code of all the relevant classes in this case.
    Hope this helps.
    Regards
    - Atul

  • Manipulating Central repository

    Hi,
    I have a question about the Central repository in Data Services. Let's say user2 (a local repository user) checked out a job, say job_1, from the central repository. Then we deleted user2's local repository (i.e. deleted user2 from the database). Now, how do we change the checkout status of job_1 (checked out to user2) in the central repository so that other users (other local repositories) can check it out? We encountered a similar situation today; I would really appreciate it if anybody could provide a solution to the above problem. Thanks in advance.

    If you have deleted the local repository that had these objects checked out, possibly the only option to get them released is to run an UPDATE script on the Central Repository database. It has happened to us as well, and running the script below fixed the issue.
    1. Run the query below on your Central Repository database. You should see the list of checked-out objects in the result:
    SELECT * FROM ALVW_OBJ_CINOUT
    WHERE STATE <> 0
    2. Identify those that you intend to release (only those which belong to the deleted Local Repository) and run an UPDATE script like the one below to set the State column value to 0.
    For example:
    UPDATE [Table Name]
    SET State = 0
    WHERE Name = xxxxx
    The Table Name differs based on the object type. Below is the list of table names:
    AL_LANG (This table contains the info about Job objects, such as Job, WF, DF)
    AL_FUNCINFO (This table contains the info about custom functions)
    AL_PROJECTS (This table contains the info about Projects)
    AL_SCHEMA (This table contains the info about db schemas such as tables)
    Therefore, based on the type of the checked-out object, you should run the UPDATE statement on one or more of the above tables. For example, the script below unlocks the dataflow called MyTest_DF:
    UPDATE AL_LANG
    SET State = 0,
            CheckOut_DT = NULL,
            CheckOut_Repo = NULL
    WHERE Name = 'MyTest_DF'    --- The name of the checked out object that we want to release.
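    As a complementary, hedged sketch (not official SAP guidance), you can check what a given table still reports as checked out before and after such an UPDATE; the column names below are taken from the UPDATE example above:
    -- Sketch only: list AL_LANG objects that are still checked out.
    SELECT Name, State, CheckOut_DT, CheckOut_Repo
    FROM   AL_LANG
    WHERE  State <> 0;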
    Note: please MAKE SURE you have a backup of your Central Repository before doing the update, in case you need to revert the changes. This should be seen as a last resort for unlocking the objects; next time, be more careful before deleting a local repository and ensure no object is checked out by it.
    Tootia

  • What is the recommended way of connecting to repository out of WebDAV, RMI, JNDI and JCA connector ?

    What is the recommended way of connecting to the repository among the WebDAV, RMI, JNDI, and JCA connector possibilities provided by CQ 5.5?

    Hi dp_adusumalli,
    I recognized your list of ~8 questions you posted at around the same time, as I received that same list in our customer implementation from Arif A., from the India team, visiting San Jose. :-)
    I provided him feedback for most of the questions, so please check back with Arif for that info.
    For this particular question, can you provide specifics for the types of interactions you are interested in?
    Understanding the kinds of things you need to achieve will help determine which of the CQ/CRX interfaces is best suited for the task(s).
    I've collated a few points on this subject on this page:
    Manipulating the Adobe WEM/CQ JCR
    Regards,
    Paul

  • Data conversion rule manager & repository

    Hi All,
    We are running a fairly large legacy data conversion project from IMS and DB2 to Oracle, which will involve quite a wide range of conversion/mapping rules (from complex data manipulation to simple mapping). These requirements and rules involve both tech and business collaboration. Are there any data conversion rule manager tools out there that provide these types of capabilities:
    1. Data conversion rule repository and version control
    2. collaboration features (e.g. collaboration editing, commenting, decision capture, approval...etc)
    We have checked some of the tools out there, and they are either generic requirement-gathering tools, which are basically a few tabs with drop-downs and a big text box, or end-to-end data conversion automation solutions, which focus on the execution of the data conversion instead of managing & tracking conversion rules.
    Can anyone help? Please let me know if there is anything that comes close that is practical and useful. Thank you.

    Tubby, thank you for the info. Yes, we actually use ODI as a dev tool for data conversion rule implementation and execution between our staging DB and target DB. However, ODI does not really have the capability to allow our tech and business people to collaborate and focus on getting all these rules defined correctly. Having said that, ODI does have many useful side features for our developers to leverage and share information, such as knowledge modules and data mappings, etc.
    We would hope to have a practical data rule management tool & repository that allows tech and business to work together, gather all the rules, and build up the knowledge base in a single place. Dev/QA can then take over and focus on implementation and validation.
    Please let me know. Thank you very much.

  • Repository problems

    I have XML docs stored in the purchaseOrder table, which I transferred there using XDB repository folders via FTP.
    I truncated the purchaseOrder table.
    Now I don't see any rows in purchaseOrder, but I still see the repository showing the XML docs in Windows Explorer (though not the contents).
    (1) What is a repository? Is it only metadata, with the contents in all cases stored in database tables?
    (2) I am not able to remove the repository paths/links via any method (the SQL error is given below; I get similar errors in Windows Explorer and Oracle Enterprise Manager).
    SQL> delete from resource_view where any_path='/home/SCOTT/purchaseOrders/1999';
    delete from resource_view where any_path='/home/SCOTT/purchaseOrders/1999'
    ERROR at line 1:
    ORA-31110: Action failed as resource /home/SCOTT/purchaseOrders/1999 is locked
    by name
    ORA-06512: at "XDB.XDB_RVTRIG_PKG", line 0
    ORA-06512: at "XDB.XDB_RV_TRIG", line 9
    ORA-04088: error during execution of trigger 'XDB.XDB_RV_TRIG'
    (3) When I try to remove a resource from the repository for which the corresponding content exists in the database, I have no problem.

    A resource is the general term (from the IETF WebDAV spec) for an object in a file/folder hierarchy. WebDAV defines a set of verbs for creating, manipulating and deleting resources, a set of metadata that a DAV server will maintain about each resource, and a communication protocol based on HTTP and XML for exchanging information between a WebDAV client and server.
    In the case of XML DB, for non-schema-based documents the content and the metadata are both stored in tables owned by the XDB schema. For schema-based content the XDB tables maintain the metadata, and the content is stored in the default tables defined/created when the schema was registered. The location of these tables is, by default, in the relational schema of the user who registered the schema.
    Resources are automatically created when WebDAV or FTP clients send requests to the XML DB protocol handlers. They can also be created via functions in the dbms_xdb PL/SQL package.
    A resource container is simply a 'folder' or 'directory'. Again, these can be created via WebDAV or FTP. They can also be created using dbms_xdb.createFolder()...
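    Here is a minimal PL/SQL sketch of those DBMS_XDB calls; the paths are hypothetical examples, and on a locked resource such as the one in the error above, deleteResource may still fail until the lock is released:
    -- Sketch only: create a folder resource and delete a single document
    -- resource via DBMS_XDB. Paths are hypothetical.
    DECLARE
      l_created BOOLEAN;
    BEGIN
      l_created := DBMS_XDB.createFolder('/home/SCOTT/purchaseOrders/2000');
      IF l_created THEN
        DBMS_OUTPUT.put_line('Folder created');
      END IF;
      -- Remove one resource by its repository path
      DBMS_XDB.deleteResource('/home/SCOTT/purchaseOrders/1999/po1.xml');
      COMMIT;
    END;
    /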
    I am not sure what you mean by "In the specify a file at the URL option what value do we give...". Can you be more specific?
    I answered the question about partitioning by folder location in your other post on this topic... It's a possible future direction...

  • Installation of OWB Repository on RAC fails at 77%

    All,
    We are trying to install OWB (Version 10.2.0.3.33 on AIX) Repository on RAC and installation fails on
    77% completion - installation is performed on the AIX server.
    this is where it fails
    jvm.sh -classpath ../../lib/int/rtpplatform.jar:../../lib/int/rtpcommon.jar:../../../jdbc/lib/ojdbc14.jar:../../../lib/xmlparserv2.jar:../../../sqlj/lib/runtime12.jar:../../../jdk/jre/lib/rt.jar oracle.wh.runtime.platform.service.install.ServiceInstaller %host %port %serviceName %user %password
    Java Exception:
    [getConnection]....
    after executing the output
    [processSPAWN]: A spawned program error. Exception = java.lang.Exception: Error : java.sql.SQLException: ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    Get the error, stop processing other tokens...
    java.lang.Exception: Error : java.sql.SQLException: ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    oracle.wh.service.impl.assistant.Monitor.waitForNotify(ProcessEngine.java:1489)
    oracle.wh.service.impl.assistant.ProcessEngine.processSPAWN(ProcessEngine.java:878)
    oracle.wh.service.impl.assistant.ProcessEngine.processFileTokens(ProcessEngine.java:583)
    oracle.wh.service.impl.assistant.ProcessEngine.createOWBRepository(ProcessEngine.java:269)
    oracle.wh.ui.install.assistant.wizards.AssistantWizardDefinition$4.runTask(AssistantWizardDefinition.java:531)
    oracle.ewt.thread.TaskScheduler.runTask(Unknown Source)
    oracle.ewt.thread.TaskScheduler.processTask(Unknown Source)
    oracle.ewt.thread.TaskScheduler$TaskQueue.run(Unknown Source)
    oracle.ewt.timer.Timer.doRun(Unknown Source)
    oracle.ewt.timer.Timer.run(Unknown Source)
    java.lang.Thread.run(Thread.java:568)
    Exception = Exception occured in 'processSPAWN'. java.lang.Exception: Error : java.sql.SQLException: ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    [executeOwbReposOrRuntime_advanced]: Error occurred during installation. Exception =java.lang.Exception: Exception occured in 'processSPAWN'. java.lang.Exception: Error : java.sql.SQLException: ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    java.lang.Exception: Exception occured in 'processSPAWN'. java.lang.Exception: Error : java.sql.SQLException: ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    oracle.wh.service.impl.assistant.ProcessEngine.processSPAWN(ProcessEngine.java:898)
    oracle.wh.service.impl.assistant.ProcessEngine.processFileTokens(ProcessEngine.java:583)
    oracle.wh.service.impl.assistant.ProcessEngine.createOWBRepository(ProcessEngine.java:269)
    oracle.wh.ui.install.assistant.wizards.AssistantWizardDefinition$4.runTask(AssistantWizardDefinition.java:531)
    oracle.ewt.thread.TaskScheduler.runTask(Unknown Source)
    oracle.ewt.thread.TaskScheduler.processTask(Unknown Source)
    oracle.ewt.thread.TaskScheduler$TaskQueue.run(Unknown Source)
    oracle.ewt.timer.Timer.doRun(Unknown Source)
    oracle.ewt.timer.Timer.run(Unknown Source)
    java.lang.Thread.run(Thread.java:568)
    Our RAC Environment:
    2 Nodes with an DB instance each:
    Host and Instance 1: Instance Name=SPBINT1 - Service Names: SPBINT and OWB1_SPBINT
    Host and Instance 2: Instance Name=SPBINT2 - Service Names: SPBINT and OWB2_SPBINT
    1. When we try to install Repository using Service name OWB1_SPBINT - installation fails with above java exception.
    2. If we try to install the Repository using service name SPBINT, the installation is successful, but the file rtrepos.properties is created with no content. We have tried to re-create the rtrepos.properties file by running the SQL script "reset_repository.sql" after manipulating the table WB_RT_SERVICE_NODES, setting the service name to OWB1_SPBINT. This fails with the error message:
    ERROR at line 1:
    ORA-29532: Java call terminated by uncaught Java exception: java.io.IOException
    ORA-06512: at "XXOWB_REP.WB_RTI_UTIL", line 29
    ORA-06512: at "XXOWB_REP.WB_RTI_UTIL", line 93
    ORA-06512: at line 2
    All help is appreciated.
    Thanks
    Bjorn-Erik

    ------ 2009/10/1 updated by Peter Hwang Sun 64 OWB 10201 patched to 10204
    If you patch your OWB with 1020X and you are running OWB in a two-tier environment (the OWB Control Center running separately from the DB server), you may want to check the wb_rt_service_nodes table for its record.
    select * from wb_rt_service_nodes;
    NODE_ID=1, INSTANCE_NUMBER=1, HOST=SunDB01, PORT=1521, SERVICE_NAME=test, ENABLED=1, RUNTIME_VERSION=(null), SERVER_SIDE_HOME=(null)
    In our environment, the last two fields need to be NULL.
    During the 10204 OWB patch (app server binary), the process updated the record and populated RUNTIME_VERSION with "Generated" and SERVER_SIDE_HOME with "/apps/owb" (the OWB application server side home).
    Once we removed the above two values, our OWB Control Center ran fine.
    ------ 2009/10/1 updated by Peter Hwang Sun 64 OWB 10201 patched to 10204
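    A hedged sketch of the corresponding cleanup, assuming the WB_RT_SERVICE_NODES layout shown above (back up the repository schema first; the node_id value is only illustrative):
    -- Sketch only: clear the two columns Peter describes for the affected node.
    UPDATE wb_rt_service_nodes
    SET    runtime_version  = NULL,
           server_side_home = NULL
    WHERE  node_id = 1;   -- illustrative node id
    COMMIT;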

  • Merge repository

    I have a problem merging repositories with the following procedure:
    - open the 1st repository that I want to merge in offline mode
    - click merge (from the File menu)
    - select a dummy repository as the original repository
    - select the 2nd repository (for merging) as the modified repository
    - click merge
    Please advise on the procedure to get it working. I'm using 10.1.3.3.

    Hi Daan bakboord,
    I've followed the great procedure described at this URL:
    http://oraclebizint.wordpress.com/2007/11/22/oracle-bi-ee-101332-merging-repositories/
    My exact manipulations, in detail and in order, were:
    - First : Open the first current repository in offline mode (current_repository.rpd).
    - Second : Copy this current repository and select it (current_dummy_repository.rpd) as dummy repository for original repository.
    - Third : Choose the modified repository (modified_repository.rpd).
    - Fourth : Save merged as repository (current_repository_merged.rpd).
    And finally my saved merged repository (current_repository_merged.rpd) is exactly the same as my modified repository (modified_repository.rpd), as if the merge lost the objects of the first (current) repository. It seems the merged repository is equal to the modified repository.
    Probably a bad operation on my part?
    Has anybody had the same problem and succeeded in solving it?
    Thanks in advance for your time.
    B.Duclos

  • Audit data from Identity Mgr repository

    Hi,
    I want to get data like this: if I create an account for a user today (say the date is A) and assign him certain resources, and these resources need to be approved, they are approved after two days (say the date is B) and created at that time.
    Any clue how I could get the date when the request was made?
    Is there any way, or a workflow service op, that lets us get the audit data for a user from the repository?
    Thanks,
    Pandu

    In the Audit Vault Server infrastructure, there are a number of objects that are used to store the audit data from source databases. The Agent/Collector continually extracts and sends the audit data from the source databases into the Audit Vault repository. One of the tables that stores the 'raw' audit data is av$rads_flat which should never be externalized, changed, or manipulated by anyone.
    Out of the box, a job runs every 24 hours to transform or normalize the raw audit data into the warehouse structure that the reports are run against. The warehouse tables are published in the Audit Vault Auditor's documentation, and you can run your own reporting tool or other PL/SQL against them to analyze the audit data.
    What do you want to do with the raw audit data?
    Thanks, Tammy

  • Can Multiple users work on the same work Repository ?

    I have a master repository and a work repository on one machine. Can multiple developers connect and work on the same work repository? How?

    Oh yes!
    It is very simple.
    Follow these steps:
    Once the master and work repositories have been created on a system, you just need to know all the information supplied when creating a login in Designer: the database user name and password, URL, and driver, as well as the master repository's user name and password.
    If you have that information, you can create a new Designer login providing all of the above, and you will have full access in Designer to the work repository you want to connect to.

  • How can I move the ODI Work Repository from one server to another server?

    How can I move the ODI Work Repository from one server to another server?

    Hi,
    If you would like to move your source models, target models and project contents from one work repository to another, i.e. from a Dev server to a Prod server:
    1. First, manually replicate the master repository connections, i.e. with the same naming conventions.
    2. Go to the Dev server work repository -> File tab -> click Export Work Repository (save it to a folder).
    3. After exporting, you can view the XML files in the folder.
    4. Now open the Prod server and make sure you have already replicated the master repository details.
    5. Right-click on the model and import the source model in synonym mode INSERT_UPDATE (select the source model from the folder where your XML files are located).
    6. Similarly, import the target models and then the project.
    Now check. It should work.
    Thank you.

  • Is there a way to create a local package repository

    Is there a way to create a local package repository without technically being a mirror? For example, setting up multiple AL boxes on my network and having them grab all the latest packages from one AL box?
    Thanks,
    Craig

    What you most likely want is an ABS tree of your own, containing only the PKGBUILDs of those packages which you want to be included in your repository.
    You should already have heard of the gensync program. In short, the parameters are the root of the PKGBUILDs, sorted into subdirectories (i.e. like the ABS tree), the intended name and location of the repository database file, and the directory containing the binary packages.
    Let's assume you downloaded the current ABS tree to your hard drive, as well as all matching (same version as in the PKGBUILDs!) packages from a mirror, but you don't want the reiserfsprogs package in your repository. To achieve that, you must remove the /var/abs/base/reiserfsprogs directory, and may optionally remove the binary package, too. Since gensync analyzes the ABS tree you supplied as a parameter, removing the subdirectory of a specific package will cause this very package to not be included in the generated database. Assuming your packages lie in /home/arch/i686/current, your gensync call would look like this:
    gensync /var/abs /home/arch/i686/current/current.db.tar.gz /home/arch/i686/current
    If there are any discrepancies like
      - PKGBUILD, but no matching binary package found
      - PKGBUILD and binary package versions do not match
      - permission problems (writing the db file must be possible)
    gensync will gladly complain.
    Otherwise you should find the db file in the place you specified. Keep in mind that the name of the db.tar.gz file must be equal to the repository tag in the pacman.conf to use the repo.
    To make sure the db contains the right packages, use
    tar -tzf current.db.tar.gz | less
    to list the contents. Every package has its own subdirectory including the metadata, which is rather obvious considering the file is generated from such a structure in the first place.
    The binary packages along with a correctly generated db file are all you need. Make the repository directory containing these files available through FTP if local availability doesn't cut it for you, edit your pacman.conf if needed, and use it!
    Adding packages works similarly: all you need is the PKGBUILD in an ABS-like tree (it doesn't have to be the official tree; gensync doesn't care where the files come from, just stick to one subdirectory per PKGBUILD and you'll be fine) and the matching packages somewhere else; run gensync with the appropriate directories, and cackle with glee.
    HTH.

  • How to create a repository (not just custom) using your hard drive

    I don't know if many people know about this, so I am giving this a shot. There are three major articles on wiki.archlinux.org: Custom local repository, Using a CD-ROM as a repository, and Offline Installation of Packages. These are available online through the wikis at archlinux.org.
    I was first confused because, when I was reading "Offline installation of packages", I didn't know what these ".db.tar.gz" files were. I came mainly from a Debian/Ubuntu background (I actually tried many distros before this), so I had to get used to the way the repository works without a graphical install manager for it. However, I enjoy a challenge, and I found out that these are database files that contain descriptions of the packages and where they are located. The ones on the FTP server are already compiled; I don't know, however, whether they are compiled from the most recent versions.
    With all that said, I thought you had to have it all in one directory in order for this to work, but as it turns out, location is not really an issue. I decided to have a directory reside under root. I chose root because it's only for the install of my own packages. I could have done it under a separate user account, such as "repos" in PCLinuxOS (another distro I tried), but I didn't want a separate account for this. Therefore, I created "/root/repository". Within this directory I created directories for all the repository archives. I basically did a "cd /mnt/dvd" and migrated to the particular repository directories, then copied all the "pkg.tar.gz" files into their respective directories with "cp * ~/repository/<name-of-dir>". For instance, I started with the "core" directory, because there were some things I didn't install from core during installation, and if the packages needed them, they were there. The same goes for the rest of the directories, such as "community", "testing", "unstable", etc. You can check the FTP mirrors to find out what directories are available. The main point is that your files should be in the ".pkg.tar.gz" format. These are package files that get converted into a sort of database which, as I mentioned, tells the system the description, where the files are located, and so on.
    The command to create this database is called "repo-add". The format for this command is "repo-add /path/to/dir.db.tar.gz *.pkg.tar.gz". So, for the core packages, you would "cd ~/repository/core" and run "repo-add core.db.tar.gz *.pkg.tar.gz". You can replace core with whatever repository you are adding; for example, "extra.db.tar.gz" would be in the "extra" directory. You can also extract an existing database with "tar -xvf /root/repository/core/core.db.tar.gz". This information is located in the "Offline installation of packages" article.
    Then, you need to edit pacman's configuration file, "/etc/pacman.conf". I basically comment everything out except for the repositories I need. So, for example, "[core]" points at "/etc/pacman.d/core", which tells pacman where the servers for these files are normally located. This information is located in the "Custom local repository" article.
    Furthermore, I edited each server file located in "/etc/pacman.d/<repository>", where repository is core, extra, etc. I would run "nano /etc/pacman.d/core", for example, and comment out all the servers. I then added a local repository by typing in "file:///root/repository/core", saved it, and then did a "pacman -Sy" to update the repository database. Now I can do "pacman -S <package-name>", where package-name is whatever I want to install. Voila! Please let me know of any suggestions, questions, insights, or comments; I hope I'm not missing anything in this article. I do remember using "rm -rf *" in the "/var/lib/pacman/<repository>" directories and then "tar xvf <repository>.db.tar.gz"; I don't know if that had something to do with it, though. Be careful with the "rm -rf *" command, because you can erase your hard drive if you are not careful, for those who aren't informed.
    P.S. Please note that all of this is done as the root user.

    pressh wrote:
    gradgrind wrote:
    smitty wrote: pressh, I understand and appreciate your point of view... well taken! Are you implying that I should have written it in steps, such as 1, 2, and 3? Also, should I have gotten rid of the redundant information if it is contained in the wiki article, and/or taken out the commands on how to apply them and left only the explanation? Is this what you mean? Sorry if I seem redundant with these questions, but I'm curious so I can improve in the future. I am new to this and open to any suggestions and comments.
    Maybe you could either edit the existing wiki pages where they were not clear to you, or else add a new wiki page, or both. Certainly give the whole thing a clearer (visual) structure, and (if they don't already exist) add links between the connected wiki pages.
    Yes, that is partly what I mean. Further, you could get rid of the information that is not really needed to follow the guide (for example, what the command 'repo-add' does; people could, if they are interested, look it up in the script itself, or you could add it here and link to it).
    And yes, a bit of structure would be nice. You don't necessarily have to call it 1, 2, 3, as long as it has some kind of structure (the visual point is very important here). You could take a look at existing wiki pages on the web and see how most of them (not all of them are good, of course) are structured.
    That's a good point, too. How do I find out which articles are more effective? I am doing research on this particular matter at the moment and came across articles that have tips on technical writing. Could this help in the long run? Or is it better to get feedback from other users and improve that way? In other words, do first and ask later, as one user pointed out?

  • I would like to know why my bill was manipulated to cause an overage fee.

    Halfway through my previous billing cycle, I reduced my data plan from 3GB shared between me and my girlfriend to 2GB shared, so they pro-rated the next bill, showing it to be around $30 cheaper than it would otherwise be. Fast forward to today, I go to check my bill and see that I've incurred a $15 usage fee. However, when I go look at the Usage breakdown, I see the following:
    It clearly shows that the billing cycle started on May 20th, and that I made the change to go down to 2GB on the 30th. They decided to set my usage allowance for that period to 1.06400GB even though I'd already used 1.37900GB, leaving a .31800GB overage. Now, of course it shows that from May 31st through June 19th, I was allowed 1.29000GB and used 1.21100GB, clearly showing that even if it were split properly, I'd still be over.....except for the fact that I have text alerts set to notify me when I'm at 50%, 75%, and 90% usage so I can decide when to stop and avoid an overage fee.
    They seemingly, intentionally, manipulated the numbers in such a way as to get $15 more out of me that they otherwise wouldn't get because they knew I'd stop using data before an overage occurred.
    I am fully aware that $15 isn't a big deal. However, the fact that they apparently manipulated the numbers in such a way as to force an overage fee upon me is just plain wrong and it makes me very angry. The best part about this is that when you add up the 1.06400GB allowance of the first portion and the 1.29000GB of the second portion, that only totals 2.354GB, not the 3GB that the plan originally was. I had chosen to have the changes apply at the beginning of the next billing cycle, so there is no reason for me to have been shorted the 0.646GB of data, much less to have incurred an overage fee.
    Is this sort of thing common? Is it a glitch in the system, or was someone manually making these changes with malicious intent? More importantly, can or will they do anything to fix this?
    Verizon Wireless Customer Support

    See, the problem is you're both ignoring two facts.
    #1 - of the two options when changing the plan, I chose to have the changes not apply until the beginning of the next billing cycle. There never should have been the possibility of an overage as there should have been no prorating when I chose to have no changes until next billing cycle. The plan I was on was a 3GB plan, I had only used a total of 2.59GB. Again, that's aside from the fact that there shouldn't have been a prorating to begin with as I chose to have the new plan not apply until the next billing cycle.
    #2 - The total allowance between the two pieces of prorated data allowances is only 2.354GB, not the 3GB of the plan I was currently on. Yet again, I don't know how many more times I have to say this for it to be understood, I chose to have the changes not apply until the next billing cycle. This means that the plan should have never been split to begin with, let alone shorted 0.646GB of usage allowance.
    I'm sorry if I'm being short with you, but this is a clear-cut case of an error in the process and I do not appreciate being told I am mistaken when you haven't even bothered to pay attention to all the details and check the math for yourself.
