Search results: 4 matching posts
2013-11-20
Summary of Common MySQL Commands
To restrict which hosts a user may log in from, update the Host field of the corresponding row in the User table, then restart the database service for the change to take effect. You can then log in with commands like the following (these are only the most common options; see the documentation for the full list):

    mysql -uroot -p
    mysql -uroot -ppassword
    mysql -hHOST_IP -uUSER -pPASSWORD
    mysql DATABASE -uUSER -p
    mysql DATABASE -uUSER -pPASSWORD

1. List the databases on the server: SHOW DATABASES;
2. Create a database named MYSQLDATA: CREATE DATABASE MYSQLDATA;
3. Switch to the database you created: USE MYSQLDATA; (the message "Database changed" means it worked)
4. List the tables in the current database: SHOW TABLES;
5. Create a table: CREATE TABLE MYTABLE (name VARCHAR(20), sex CHAR(1));
6. Show a table's structure: DESCRIBE MYTABLE;
7. Insert a record: INSERT INTO MYTABLE VALUES ("hyq", "M");
8. Load data from a text file (for example D:/mysql.txt): LOAD DATA LOCAL INFILE "D:/mysql.txt" INTO TABLE MYTABLE;
9. Import a .sql file (for example D:/mysql.sql): USE database; SOURCE d:/mysql.sql; Alternatively, from the shell: mysql -h127.0.0.1 -uUSER -pPASSWORD DATABASE < d:/mysql.sql
10. Drop a table: DROP TABLE MYTABLE;
11. Empty a table: DELETE FROM MYTABLE;
12. Update rows in a table: UPDATE MYTABLE SET sex="f" WHERE name='hyq';

Some MySQL administration notes I came across on the web:

On Windows, MySQL runs as a service; make sure the service is started before use (net start mysql). On Linux, start it with "/etc/rc.d/init.d/mysqld start"; the user starting it needs administrator privileges.

A fresh MySQL installation contains a root account with an empty password and an anonymous account, which is a serious security hole. For any important deployment, delete the anonymous account and set a password for root:

    use mysql;
    delete from User where User="";
    update User set Password=PASSWORD('newpassword') where User='root';

In development and in production, applications should not connect as root. root is convenient for testing, but it is a major security risk and encourages sloppy administration. Give each application user only the privileges it actually needs; for example, a user that only inserts data should not be able to delete it. MySQL user management is implemented through the User table, and there are two common ways to add a user: insert a row into the User table with the appropriate privileges, or use the GRANT command. Common GRANT forms:

    grant all on mydb.* to NewUserName@HostName identified by "password";
    grant usage on *.* to NewUserName@HostName identified by "password";
    grant select,insert,update on mydb.* to NewUserName@HostName identified by "password";
    grant update,delete on mydb.TestTable to NewUserName@HostName identified by "password";

To let the new user manage (grant) the privileges it holds on those objects, append WITH GRANT OPTION. If you add users by inserting into the User table directly, encrypt the Password field with the PASSWORD() function so the password is not stored in plain text. Remove accounts that are no longer used, and revoke excessive privileges promptly, either by updating the User table or with the REVOKE statement.

Global administrative privileges:
FILE: read and write files on the MySQL server.
PROCESS: show or kill server threads belonging to other users.
RELOAD: reload the grant tables, flush logs, and so on.
SHUTDOWN: shut down the MySQL server.

Database/table/column privileges:
ALTER: modify existing tables (for example add or drop columns) and indexes.
CREATE: create new databases or tables.
DELETE: delete rows from a table.
DROP: drop tables or databases.
INDEX: create or drop indexes.
INSERT: insert rows into a table.
SELECT: read rows from a table.
UPDATE: modify existing rows in a table.

Special privileges:
ALL: allows everything (like root).
USAGE: allows login only; nothing else.
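To illustrate the advice above about not connecting as root, here is a minimal sketch of an application connecting with a restricted account and using the MYTABLE example table. It assumes the mysql-connector-python package is installed; the appuser/apppass credentials are placeholders for an account created with a narrow GRANT like the ones shown above.

    import mysql.connector  # assumption: mysql-connector-python is installed

    # Connect with a restricted application account, not root.
    conn = mysql.connector.connect(
        host="127.0.0.1",
        user="appuser",        # placeholder account, e.g. granted only select/insert on MYSQLDATA.*
        password="apppass",
        database="MYSQLDATA",
    )
    cur = conn.cursor()

    # Parameterized insert into the MYTABLE example from the article.
    cur.execute("INSERT INTO MYTABLE (name, sex) VALUES (%s, %s)", ("hyq", "M"))
    conn.commit()

    # Read the rows back.
    cur.execute("SELECT name, sex FROM MYTABLE")
    for name, sex in cur:
        print(name, sex)

    cur.close()
    conn.close()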
2011-07-13
Database Optimization Patterns
Most websites and enterprise applications rely on a backing database to store application and customer data, so at some point the database can become the main performance and scalability bottleneck of your system. This article is about curing that.

Database supporters and resisters

Database supporters: MySQL, SQL Server, and PostgreSQL. These are strong competitors, each with its own philosophy and feature set. SQL Server, for example, has rich features such as UTF-16 support in its data types, triggers, roles and policies, and integration with .NET. MySQL is free and open source and offers many storage engines. PostgreSQL is free and open source and supports an object-oriented data model.

What about performance? Database performance matters. In real, trusted benchmarks, MySQL is faster than SQL Server, and PostgreSQL performs better than SQL Server and almost as well as MySQL. PostgreSQL and MySQL also have a small footprint and consume few system resources.

Database resisters: HBase, MongoDB, Redis, and others. Do all applications really need a database behind them? No single option fits every situation, so not every application needs one; the golden rule (in my opinion) is that every challenge has its own solution. Many scenarios side with the database resisters, such as web search engines, caching, and file sharing; many others side with the database supporters, such as ERP, CRM, and e-commerce. A tiny-URL service, for example, should not use a database at all, because all it needs is to map a short URL to the real URL. But what can we use beside, or instead of, a database?

There are many tools that follow the CAP and BASE models instead of the ACID model. The CAP model is about:
* Consistency – your data is correct all the time; what you write is what you read.
* Availability – you can read and write your data all the time.
* Partition tolerance – if one or more nodes fail, the system keeps working and becomes consistent again when they come back online.

In an in-memory or on-disk caching system, for example, you will never need all the features of a database; a CAP-style system is enough. Today there are plenty of document- and key-value-oriented stores, such as:
* MongoDB – document oriented
* CouchDB – document oriented (JSON)
* Redis – key-value oriented
* Tokyo Cabinet – key-value oriented
* Memcached – in-memory, key-value oriented

These can serve many scenarios with better performance than a database, and most of them support distribution and partitioning and provide good fault-tolerance algorithms. Even so, some people still ask: why a CAP-model or key-value store when the database works fine? Because the database is often the performance- and scalability-limiting factor in most kinds of applications; relational databases have not changed much in the last 30 years, and for a lot of purposes using one is akin to running an interstellar spaceship with an old Volkswagen engine in the engine room.

So the first lesson today is: consider using a key-value store, depending on your needs.
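To make the tiny-URL example above concrete, here is a minimal sketch of the key-value approach, assuming the redis-py client and a local Redis server; the key names and the shorten helper are illustrative, not part of the original article.

    import hashlib

    import redis  # assumption: the redis-py client is installed

    r = redis.Redis(host="localhost", port=6379)

    def shorten(long_url):
        # Derive a short token from the URL; a real service would handle collisions.
        token = hashlib.sha1(long_url.encode("utf-8")).hexdigest()[:7]
        # One key-value pair per mapping: no schema, no joins, no DBMS overhead.
        r.set("url:" + token, long_url)
        return token

    def resolve(token):
        value = r.get("url:" + token)
        return value.decode("utf-8") if value else None

    token = shorten("http://example.com/some/very/long/path")
    print(resolve(token))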
Database optimization patterns

What to store in the database? Many people in our industry treat the database as the fundamental, default, primitive disk-based store, but that is wrong: there are many ways to store data, and not everything belongs in the database. In other words, don't store non-relational data in a relational data model. If you design a photo-sharing website, for example, you should not store the photo thumbnails in the database: when users search, the browser sends many requests to view those photos, opening many database connections is expensive, and the DBMS makes the whole process heavy because it is a complex system by nature. So think twice before using the database to store BLOB data.

Field data types. Every DBMS offers many data types, such as Date, Time, DateTime, Text, BLOB, VarChar, and NVarChar (in SQL Server). Choosing the right data type makes the database use less disk space and makes inserts and reads faster. Why think about it? Because the database lives on disks, and disks are slow, so minimizing the size of the stored data improves both write and read times. For example, do not use the NVarChar data type unless you really need 16-bit Unicode (UTF-16). Also be careful with the Text, NText, and BLOB data types, because the data is usually stored in another location and the row only holds a reference to its position, so retrieving it forces the DBMS to do extra seeks.

The primary key and the indexes. We use these to speed up retrieve, insert, update, and delete operations on tables. Create a primary key on every table you create and, unless you are knowledgeable enough to figure out a better plan, make it the clustered index. It is important to keep the primary key field small and to make inserts always happen at the end of the table (that is, make the primary key value always increasing); I suggest an auto-increment integer for the primary key field where possible.

Plan your indexing strategy at the same time as you plan and design the database. First, and most important, is the clustered index (the primary key). As a rule, every table in an online transaction processing (OLTP) system should get a clustered index. You can only put one clustered index on a given table, so choosing exactly which column or columns to place it on is extremely important. It makes sense to leave the cluster on the primary key if that primary key provides the natural access path: the field most commonly used either to search the table or to relate it to another table. Changing the parameters slightly, if the access path is mostly through another column, say Company Name, that column may make a more appropriate clustered index. Another situation is when the table is no longer at the top of the chain but somewhere in the middle, and the most likely avenue of selection for the data comes through the foreign key to a parent table; then the foreign key column becomes a good candidate for the clustered index. A further wrinkle: if the related data in this child table is usually retrieved by date as well, the clustered index could be composed of two columns, the foreign key and the date. As you can see, a few sentences laid out a number of options; you need to think this through as you design the system. You may also identify, during design, testing, or monitoring of the production system, that other indexes are needed (see the sketch below for the basic pattern).
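A minimal sketch of the advice above: a compact, monotonically increasing integer primary key (which InnoDB uses as the clustered index) plus one secondary index on the most common access path. The orders table and its columns are hypothetical, and the mysql-connector-python package and credentials are assumptions.

    import mysql.connector  # assumption: mysql-connector-python is installed

    conn = mysql.connector.connect(host="127.0.0.1", user="appuser",
                                   password="apppass", database="MYSQLDATA")
    cur = conn.cursor()

    # Small, always-increasing primary key: new rows are appended at the end of
    # the clustered index, and secondary indexes stay compact because they
    # reference this 4-byte key.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            order_id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            customer_id  INT UNSIGNED NOT NULL,
            ordered_at   DATETIME NOT NULL,
            note         VARCHAR(255)  -- plain VARCHAR unless UTF-16 is really needed
        ) ENGINE=InnoDB
    """)

    # One secondary index on the most common access path (lookups by customer,
    # newest first), rather than an index on every column.
    cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, ordered_at)")

    conn.commit()
    cur.close()
    conn.close()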
While multiple indexes on a table may be needed, keep in mind that each index adds overhead, because its values have to be maintained as the data is modified by inserts, updates, and deletes. Furthermore, since secondary indexes point to the clustered index (one of the many reasons a table needs a clustered index), changes to the clustered index can cascade through all the other indexes on the table. So while indexes are good and necessary, exercise restraint in how many you apply and use.

Data retrieval, stored procedures, and ad-hoc queries. When you submit an ad-hoc query to the DBMS, a number of processes on the server go to work on it. Their purpose is to return or store your data as quickly as possible while maintaining its integrity, and they take time, sometimes a long time, depending on the complexity of the query. So I suggest using stored procedures (SPs), because most of those steps happen once, at creation time, rather than on every call. In .NET, for example, LINQ to SQL can import SPs and call them, and with a few changes you can also make LINQ to SQL use SPs for every insert, update, and delete operation for better performance. Minimizing ad-hoc queries will gain you some performance, but we cannot force developers to give up such an important feature. Besides creating SPs for complex queries and for the usual insert, update, and delete operations, I suggest monitoring and tuning the execution plans of ad-hoc queries during the stabilization and testing stage.

In general, try to do the following:
- Return only the columns and rows you actually need.
- Avoid the LIKE operator, because it usually requires a full table scan.
- Avoid NOT IN; use a left outer join instead, even though NOT IN is often easier to visualize.
- Consider SET NOCOUNT ON at the top of each SP (and SET NOCOUNT OFF at the bottom).
- Avoid long-running and distributed transactions.

Caching. Caching is fundamental in read-intensive applications, and it matters in write-intensive applications too. If your DBMS has a caching system, such as query caching in MySQL, I suggest turning it on. Caching data that does not change rapidly, such as the logged-in user's roles, the top posts, configuration, and categories, in the client or application server memory is also important, because it minimizes round trips to the database server.
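A minimal sketch of the application-side caching suggested above for slowly changing data: a tiny time-based cache placed in front of a query function. This is plain standard-library Python; the load_categories function and the 600-second TTL are illustrative placeholders.

    import time

    _cache = {}  # key -> (expires_at, value)

    def cached(key, ttl_seconds, loader):
        # Return a cached value, reloading from the database only after it expires.
        now = time.time()
        hit = _cache.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]
        value = loader()
        _cache[key] = (now + ttl_seconds, value)
        return value

    def load_categories():
        # Placeholder for the real query, e.g. SELECT id, name FROM categories.
        return [(1, "OS"), (2, "Ops")]

    # Categories change rarely, so serve them from memory for 10 minutes at a
    # time instead of hitting the database on every request.
    categories = cached("categories", 600, load_categories)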
2011-07-05
MySQL 5.4 Beta Released
At the seventh MySQL Conference, Sun released a technology preview of MySQL 5.4, the latest version of its open-source database, featuring major improvements in performance and scalability. MySQL 5.4 allows the InnoDB storage engine to scale to 16-way x86 servers and 64-way CMT servers, and it also optimizes subqueries and JOINs, cutting the response time of certain queries by up to 90%. These gains are substantial and require no additional application or SQL code changes.

Karen Tegan Padir, Sun's vice president for software infrastructure and the MySQL team, said in her conference keynote: "Without any application changes, MySQL 5.4 will significantly improve their performance and scalability, and it is also better suited to deployment on scaled-up SMP systems."

New features and performance improvements in MySQL 5.4:
1. Scalability: the InnoDB storage engine scales to 16-way x86 servers and 64-way CMT servers, doubling performance.
2. Subquery optimization: analytic query performance improves, with subqueries executing faster than in previous versions.
3. New query algorithms: main memory is used to speed up multi-way joins, especially for MySQL Cluster.
4. Improved stored procedures: enhanced SIGNAL/RESIGNAL error handling lets applications rely more easily on business logic in stored procedures.
5. Improved prepared statements: prepared statements now support output parameters.
6. Improved information schema: more metadata is exposed for stored procedures, so developers get more information through connectors such as ODBC and JDBC.
7. Improved DTrace support: better diagnostics and troubleshooting for MySQL on the Solaris operating system.

Supported platforms: MySQL 5.4 targets a wide range of hardware and software platforms, including Red Hat Enterprise Linux (RHEL), Novell SUSE Enterprise Linux, Microsoft Windows, Sun Solaris 10, Apple Mac OS X, FreeBSD, HP-UX, IBM AIX, IBM i5/OS, and other Linux distributions. The technology preview currently supports only 64-bit Linux and Solaris 10; the final release is expected later this year.

Official download: http://dev.mysql.com/downloads/mysql/5.4.html
2011-07-01
How FriendFeed uses MySQL to store schema-less data
Background

We use MySQL for storing all of the data in FriendFeed. Our database has grown a lot as our user base has grown. We now store over 250 million entries and a bunch of other data, from comments and “likes” to friend lists.

As our database has grown, we have tried to iteratively deal with the scaling issues that come with rapid growth. We did the typical things, like using read slaves and memcache to increase read throughput and sharding our database to improve write throughput. However, as we grew, scaling our existing features to accommodate more traffic turned out to be much less of an issue than adding new features.

In particular, making schema changes or adding indexes to a database with more than 10 – 20 million rows completely locks the database for hours at a time. Removing old indexes takes just as much time, and not removing them hurts performance because the database will continue to read and write to those unused blocks on every INSERT, pushing important blocks out of memory. There are complex operational procedures you can do to circumvent these problems (like setting up the new index on a slave, and then swapping the slave and the master), but those procedures are so error prone and heavyweight, they implicitly discouraged our adding features that would require schema/index changes. Since our databases are all heavily sharded, the relational features of MySQL like JOIN have never been useful to us, so we decided to look outside of the realm of RDBMS.

Lots of projects exist designed to tackle the problem of storing data with flexible schemas and building new indexes on the fly (e.g., CouchDB). However, none of them seemed widely-used enough by large sites to inspire confidence. In the tests we read about and ran ourselves, none of the projects were stable or battle-tested enough for our needs (see this somewhat outdated article on CouchDB, for example). MySQL works. It doesn’t corrupt data. Replication works. We understand its limitations already. We like MySQL for storage, just not RDBMS usage patterns.

After some deliberation, we decided to implement a “schema-less” storage system on top of MySQL rather than use a completely new storage system. This post attempts to describe the high-level details of the system. We are curious how other large sites have tackled these problems, and we thought some of the design work we have done might be useful to other developers.

Overview

Our datastore stores schema-less bags of properties (e.g., JSON objects or Python dictionaries). The only required property of stored entities is id, a 16-byte UUID. The rest of the entity is opaque as far as the datastore is concerned. We can change the “schema” simply by storing new properties.

We index data in these entities by storing indexes in separate MySQL tables. If we want to index three properties in each entity, we will have three MySQL tables – one for each index. If we want to stop using an index, we stop writing to that table from our code and, optionally, drop the table from MySQL. If we want a new index, we make a new MySQL table for that index and run a process to asynchronously populate the index without disrupting our live service.

As a result, we end up having more tables than we had before, but adding and removing indexes is easy. We have heavily optimized the process that populates new indexes (which we call “The Cleaner”) so that it fills new indexes rapidly without disrupting the site.
We can store new properties and index them in a day’s time rather than a week’s time, and we don’t need to swap MySQL masters and slaves or do any other scary operational work to make it happen.

Details

In MySQL, our entities are stored in a table that looks like this:

    CREATE TABLE entities (
        added_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        id BINARY(16) NOT NULL,
        updated TIMESTAMP NOT NULL,
        body MEDIUMBLOB,
        UNIQUE KEY (id),
        KEY (updated)
    ) ENGINE=InnoDB;

The added_id column is present because InnoDB stores data rows physically in primary key order. The AUTO_INCREMENT primary key ensures new entities are written sequentially on disk after old entities, which helps for both read and write locality (new entities tend to be read more frequently than old entities since FriendFeed pages are ordered reverse-chronologically). Entity bodies are stored as zlib-compressed, pickled Python dictionaries.

Indexes are stored in separate tables. To create a new index, we create a new table storing the attributes we want to index on all of our database shards. For example, a typical entity in FriendFeed might look like this:

    {
        "id": "71f0c4d2291844cca2df6f486e96e37c",
        "user_id": "f48b0440ca0c4f66991c4d5f6a078eaf",
        "feed_id": "f48b0440ca0c4f66991c4d5f6a078eaf",
        "title": "We just launched a new backend system for FriendFeed!",
        "link": "http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c",
        "published": 1235697046,
        "updated": 1235697046,
    }

We want to index the user_id attribute of these entities so we can render a page of all the entities a given user has posted. Our index table looks like this:

    CREATE TABLE index_user_id (
        user_id BINARY(16) NOT NULL,
        entity_id BINARY(16) NOT NULL UNIQUE,
        PRIMARY KEY (user_id, entity_id)
    ) ENGINE=InnoDB;

Our datastore automatically maintains indexes on your behalf, so to start an instance of our datastore that stores entities like the structure above with the given indexes, you would write (in Python):

    user_id_index = friendfeed.datastore.Index(
        table="index_user_id", properties=["user_id"], shard_on="user_id")
    datastore = friendfeed.datastore.DataStore(
        mysql_shards=["127.0.0.1:3306", "127.0.0.1:3307"],
        indexes=[user_id_index])

    new_entity = {
        "id": binascii.a2b_hex("71f0c4d2291844cca2df6f486e96e37c"),
        "user_id": binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"),
        "feed_id": binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"),
        "title": u"We just launched a new backend system for FriendFeed!",
        "link": u"http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c",
        "published": 1235697046,
        "updated": 1235697046,
    }
    datastore.put(new_entity)
    entity = datastore.get(binascii.a2b_hex("71f0c4d2291844cca2df6f486e96e37c"))
    entity = user_id_index.get_all(datastore, user_id=binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"))

The Index class above looks for the user_id property in all entities and automatically maintains the index in the index_user_id table. Since our database is sharded, the shard_on argument is used to determine which shard the index gets stored on (in this case, entity["user_id"] % num_shards). You can query an index using the index instance (see user_id_index.get_all above).
The datastore code does the “join” between the index_user_id table and the entities table in Python, by first querying the index_user_id tables on all database shards to get a list of entity IDs and then fetching those entity IDs from the entities table.

To add a new index, e.g., on the link property, we would create a new table:

    CREATE TABLE index_link (
        link VARCHAR(735) NOT NULL,
        entity_id BINARY(16) NOT NULL UNIQUE,
        PRIMARY KEY (link, entity_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

We would change our datastore initialization code to include this new index:

    user_id_index = friendfeed.datastore.Index(
        table="index_user_id", properties=["user_id"], shard_on="user_id")
    link_index = friendfeed.datastore.Index(
        table="index_link", properties=["link"], shard_on="link")
    datastore = friendfeed.datastore.DataStore(
        mysql_shards=["127.0.0.1:3306", "127.0.0.1:3307"],
        indexes=[user_id_index, link_index])

And we could populate the index asynchronously (even while serving live traffic) with:

    ./rundatastorecleaner.py --index=index_link

Consistency and Atomicity

Since our database is sharded, and indexes for an entity can be stored on different shards than the entities themselves, consistency is an issue. What if the process crashes before it has written to all the index tables?

Building a transaction protocol was appealing to the most ambitious of FriendFeed engineers, but we wanted to keep the system as simple as possible. We decided to loosen constraints such that:
- The property bag stored in the main entities table is canonical
- Indexes may not reflect the actual entity values

Consequently, we write a new entity to the database with the following steps:
1. Write the entity to the entities table, using the ACID properties of InnoDB
2. Write the indexes to all of the index tables on all of the shards

When we read from the index tables, we know they may not be accurate (i.e., they may reflect old property values if writing has not finished step 2). To ensure we don’t return invalid entities based on the constraints above, we use the index tables to determine which entities to read, but we re-apply the query filters on the entities themselves rather than trusting the integrity of the indexes:
1. Read the entity_id from all of the index tables based on the query
2. Read the entities from the entities table from the given entity IDs
3. Filter (in Python) all of the entities that do not match the query conditions based on the actual property values

To ensure that indexes are not missing perpetually and inconsistencies are eventually fixed, the “Cleaner” process I mentioned above runs continuously over the entities table, writing missing indexes and cleaning up old and invalid indexes. It cleans recently updated entities first, so inconsistencies in the indexes get fixed fairly quickly (within a couple of seconds) in practice.

Performance

We have optimized our primary indexes quite a bit in this new system, and we are quite pleased with the results. Here is a graph of FriendFeed page view latency for the past month (we launched the new backend a couple of days ago, as you can tell by the dramatic drop). In particular, the latency of our system is now remarkably stable, even during peak mid-day hours. Here is a graph of FriendFeed page view latency for the past 24 hours; compare this to one week ago. The system has been really easy to work with so far.
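A minimal, self-contained sketch of the read path described above, using hypothetical in-memory stand-ins for the sharded index table and the entities table; this is not FriendFeed's actual code, only an illustration of "trust the entity, not the index."

    # Hypothetical stand-ins: a canonical entities table and one index table
    # that contains a stale row (u1 -> e2), as can happen between write step 2
    # and a Cleaner pass.
    ENTITIES = {
        "e1": {"id": "e1", "user_id": "u1", "title": "hello"},
        "e2": {"id": "e2", "user_id": "u2", "title": "world"},
    }
    INDEX_USER_ID = [("u1", "e1"), ("u1", "e2")]  # second row is stale

    def read_by_user(user_id):
        # 1. Use the index only to find candidate entity IDs.
        candidate_ids = [eid for (uid, eid) in INDEX_USER_ID if uid == user_id]
        # 2. Read the canonical property bags from the entities table.
        entities = [ENTITIES[eid] for eid in candidate_ids if eid in ENTITIES]
        # 3. Re-apply the filter against the actual property values, because
        #    index rows may be stale until the Cleaner repairs them.
        return [e for e in entities if e.get("user_id") == user_id]

    print(read_by_user("u1"))  # only e1 survives the re-filter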
We have already changed the indexes a couple of times since we deployed the system, and we have started converting some of our biggest MySQL tables to use this new scheme so we can change their structure more liberally going forward.

From: http://bret.appspot.com/entry/how-friendfeed-uses-mysql