Android Allwinner A10 (2.3.4) Development, Part 2 (Linux kernel and source compilation)
The detailed "Android Allwinner A10 (2.3.4) Development, Part 2" document can be downloaded from: /forum.php?mod=viewthread&tid=504&extra=
Notes on building the source
Note: the first time you download the project to build it locally, fetch a clean tree from git that contains no build artifacts. Although make clean removes the intermediate files each time, the makefiles still do not always clean everything. I myself failed to build repeatedly, with errors I could not diagnose, because my code had been copied from someone else's machine. This matters even more for the Android source build: leftover intermediate files can produce all kinds of obscure build errors.
Source tree layout
Note: the source tree must be laid out exactly as shown below. The default location is ~/workspace/exdroid.
|-- android2.3.4                --- Android source
|-- lichee                      --- Linux kernel
|   |-- buildroot
|   |-- build.sh
|   `-- linux-2.6.36
`-- (directory for the generated image files; created after a successful build)
    |-- crane_pack_src
    |-- crane-win-v2
    |-- LogoGen
    |-- pack_25
    `-- production
Note: the android2.3.4 source and lichee must sit in the same parent directory, because the Android build refers to some of the kernel's headers and image files by relative path. For example, to build the kernel:
Enter lichee.
Enter the lichee/linux-2.6.36 directory and run make clean first to clear out any previous build.
Go back up one level with cd ..
Then run ./build.sh -p sun4i_crane to build.
If the build goes smoothly you should see a lichee/out directory containing the generated files; the bImage inside it is the kernel image (see figure).
Common errors and fixes
3.1 Error 1
Fix: go to
lichee/linux-2.6.36/modules/wifi/usi-bcm.248.15/open-src/src/dhd/linux
1. Delete the dhd-cdc-sdmmc-gpl-2.6.36-android directory.
2. Enter the lichee/linux-2.6.36 directory and run make clean first to clear out the previous build.
3. Then run ./build.sh -p sun4i_crane to build again.
Building the Android 2.3.4 source
First download the source from git, so that you start with a clean, never-built tree.
Run cd android2.3.4 to enter the directory, then:
source build/envsetup.sh
The build takes roughly an hour and a half to complete.
When it finishes, an out directory is produced in the source root (see figure).
Allwinner A20 UART driver walk-through (Linux 3.3)
I had some spare time recently and remembered the serial-port issues from an earlier project: hardware flow control on the UART, the Bluetooth serial link returning errors, the upper layer overflowing the read/write buffers, and so on. We spent quite a while on them, and although they turned out not to be the serial driver's fault, tracking them down inevitably meant reading the serial driver code. So I went through the whole driver flow once more, to have it at hand the next time it is needed. The code analysed here is the Allwinner A20 tree; let's get straight into the code.
The serial driver code lives under linux-3.3/drivers/tty/serial. Allwinner gathers its platform-specific code into a single file, sw_uart.c, so we start from its __init function:
static int __init sw_uart_init(void)
{
    int ret, i;
    struct sw_uart_pdata *pdata;

    SERIAL_MSG("driver initializied\n");
    ret = sw_uart_get_devinfo();
    if (unlikely(ret))
        return ret;

    ret = uart_register_driver(&sw_uart_driver);
    if (unlikely(ret)) {
        SERIAL_MSG("driver initializied\n");
        return ret;
    }

    for (i = 0; i < SW_UART_NR; i++) {
        pdata = &sw_uport_pdata[i];
        if (pdata->used)
            platform_device_register(&sw_uport_device[i]);
    }
    return platform_driver_register(&sw_uport_platform_driver);
}
sw_uart_get_devinfo parses the UART settings from Allwinner's sysconfig script. There are eight UARTs in total; to use one, you simply set it to 1 in the sys script, as anyone who has used the Allwinner platform will know.
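To make that concrete, here is a rough sketch of what such parsing amounts to. This is not the real sw_uart_get_devinfo(): the demo_* names are placeholders, the section/key names mirror the usual [uart_paraN]/uart_used entries seen in sys_config.fex, and script_parser_fetch() is the lichee sysconfig helper whose header location and exact signature vary by tree, so treat all of that as assumptions.

/* Sketch only: pulling a per-port "used" flag out of the sysconfig script.
 * script_parser_fetch() is the lichee sysconfig helper; its header path and
 * signature differ between trees, so treat this as an assumption. */
#include <linux/kernel.h>
#include <mach/sys_config.h>   /* script_parser_fetch(); path may differ per tree */

#define SW_UART_NR 8

struct demo_uart_pdata {
    int used;                      /* 1 = port enabled in the sys script */
};

static struct demo_uart_pdata demo_pdata[SW_UART_NR];

static int demo_get_devinfo(void)
{
    char sect[16];
    int i, used;

    for (i = 0; i < SW_UART_NR; i++) {
        snprintf(sect, sizeof(sect), "uart_para%d", i);
        if (script_parser_fetch(sect, "uart_used", &used, 1) != 0)
            used = 0;              /* section missing: keep the port disabled */
        demo_pdata[i].used = used;
    }
    return 0;
}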
Next, the sw_uart_driver structure is defined as follows:
static struct uart_driver sw_uart_driver = {
    .owner       = THIS_MODULE,
    .driver_name = "sw_serial",
    .dev_name    = "ttyS",
    .nr          = SW_UART_NR,
    .cons        = SW_CONSOLE,
};
Here SW_UART_NR is 8, and ttyS is the name that will show up under /dev/, numbered 0 through 7.
Next comes the uart registration function which, as the name suggests, registers the Allwinner platform's UARTs with the serial core (serial_core):
int uart_register_driver(struct uart_driver *drv)
{
    struct tty_driver *normal;
    int i, retval;

    BUG_ON(drv->state);

    /*
     * Maybe we should be using a slab cache for this, especially if
     * we have a large number of ports to handle.
     */
    drv->state = kzalloc(sizeof(struct uart_state) * drv->nr, GFP_KERNEL);
    if (!drv->state)
        goto out;

    normal = alloc_tty_driver(drv->nr);
    if (!normal)
        goto out_kfree;

    drv->tty_driver = normal;

    normal->owner       = drv->owner;
    normal->driver_name = drv->driver_name;
    normal->name        = drv->dev_name;   /* the name is "ttyS" */
    normal->major       = drv->major;
    normal->minor_start = drv->minor;
    normal->type        = TTY_DRIVER_TYPE_SERIAL;
    normal->subtype     = SERIAL_TYPE_NORMAL;
    normal->init_termios = tty_std_termios;
    normal->init_termios.c_cflag = B9600 | CS8 | CREAD | HUPCL | CLOCAL;
    normal->init_termios.c_ispeed = normal->init_termios.c_ospeed = 9600;
    normal->flags       = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
    normal->driver_state = drv;
    tty_set_operations(normal, &uart_ops);

    /*
     * Initialise the UART state(s).
     */
    for (i = 0; i < drv->nr; i++) {
        struct uart_state *state = drv->state + i;
        struct tty_port *port = &state->port;

        tty_port_init(port);
        port->ops = &uart_port_ops;
        port->close_delay  = HZ / 2;   /* .5 seconds */
        port->closing_wait = 30 * HZ;  /* 30 seconds */
    }

    retval = tty_register_driver(normal);
    if (retval >= 0)
        return retval;

    put_tty_driver(normal);
out_kfree:
    kfree(drv->state);
out:
    return -ENOMEM;
}
I'll just list it here for now and come back to it when it matters. It first allocates NR uart_state structures and does some initialisation on each, but at this point the states are not yet bound to any port (uart_port). After initialising each tty_port, it calls tty_register_driver:
int tty_register_driver(struct tty_driver *driver)
{
    int error;
    int i;
    dev_t dev;
    void **p = NULL;
    struct device *d;

    if (!(driver->flags & TTY_DRIVER_DEVPTS_MEM) && driver->num) {
        p = kzalloc(driver->num * 2 * sizeof(void *), GFP_KERNEL);
        if (!p)
            return -ENOMEM;
    }

    if (!driver->major) {
        error = alloc_chrdev_region(&dev, driver->minor_start,
                        driver->num, driver->name);
        if (!error) {
            driver->major = MAJOR(dev);
            driver->minor_start = MINOR(dev);
        }
    } else {
        dev = MKDEV(driver->major, driver->minor_start);
        error = register_chrdev_region(dev, driver->num, driver->name);
    }
    if (error < 0) {
        kfree(p);
        return error;
    }

    if (p) {
        driver->ttys = (struct tty_struct **)p;
        driver->termios = (struct ktermios **)(p + driver->num);
    } else {
        driver->ttys = NULL;
        driver->termios = NULL;
    }

    cdev_init(&driver->cdev, &tty_fops);
    driver->cdev.owner = driver->owner;
    error = cdev_add(&driver->cdev, dev, driver->num);
    if (error) {
        unregister_chrdev_region(dev, driver->num);
        driver->ttys = NULL;
        driver->termios = NULL;
        kfree(p);
        return error;
    }

    mutex_lock(&tty_mutex);
    list_add(&driver->tty_drivers, &tty_drivers);
    mutex_unlock(&tty_mutex);

    if (!(driver->flags & TTY_DRIVER_DYNAMIC_DEV)) {
        for (i = 0; i < driver->num; i++) {
            d = tty_register_device(driver, i, NULL);
            if (IS_ERR(d)) {
                error = PTR_ERR(d);
                goto err;
            }
        }
    }
    proc_tty_register_driver(driver);
    driver->flags |= TTY_DRIVER_INSTALLED;
    return 0;
err:
    /* error unwinding (unregister devices, remove cdev, free p) abridged */
    return error;
}
Here alloc_chrdev_region dynamically allocates the major and minor device numbers. cdev_init then ties the file_operations structure to the cdev, so that when we later open a /dev/ttyS node its open function is called. Let's look at that structure first:
static const struct file_operations tty_fops = {
    .llseek         = no_llseek,
    .read           = tty_read,
    .write          = tty_write,
    .poll           = tty_poll,
    .unlocked_ioctl = tty_ioctl,
    .compat_ioctl   = tty_compat_ioctl,
    .open           = tty_open,
    .release        = tty_release,
    .fasync         = tty_fasync,
};
cdev_add then binds the file_operations to the device numbers. No device nodes have been created yet, but note that driver->major and driver->minor_start have been filled in; those are exactly the major/minor numbers used later when the nodes are created. list_add then puts this tty_driver on a global list so it can be found later.
Next, the if statement tests the TTY_DRIVER_DYNAMIC_DEV flag, which we set earlier:
    normal->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
So that condition is false here; the function creates the /proc entry and returns. As I understand it, tty_register_driver registers a tty driver that has all the logic, but at this point it is not yet bound to any device. The corresponding ports (the chip's physical UARTs) still have to be added and the /dev nodes created; the upper layer then drives those ports through the tty_driver's logic.
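For readers less familiar with the character-device plumbing that tty_register_driver hides, the same pattern in isolation looks roughly like the sketch below. Everything named demo_* is a placeholder rather than tty code and the error handling is abridged; it simply shows the alloc_chrdev_region / cdev_init / cdev_add / device_create sequence, the last step being what a DYNAMIC_DEV tty driver postpones until tty_register_device.

/* Sketch of the generic char-device registration pattern that
 * tty_register_driver() performs internally. All demo_* names are
 * placeholders; error handling is abridged. */
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/module.h>

#define DEMO_NR_PORTS 8

static dev_t demo_base;                        /* major/minor of port 0 */
static struct cdev demo_cdev;
static struct class *demo_class;
static const struct file_operations demo_fops; /* open/read/write/... */

static int __init demo_init(void)
{
    int i, ret;

    /* dynamic major, DEMO_NR_PORTS minors starting at 0 */
    ret = alloc_chrdev_region(&demo_base, 0, DEMO_NR_PORTS, "demoS");
    if (ret)
        return ret;

    cdev_init(&demo_cdev, &demo_fops);     /* bind the fops */
    demo_cdev.owner = THIS_MODULE;
    ret = cdev_add(&demo_cdev, demo_base, DEMO_NR_PORTS);
    if (ret) {
        unregister_chrdev_region(demo_base, DEMO_NR_PORTS);
        return ret;
    }

    /* what tty_register_device()/device_create() later does per port */
    demo_class = class_create(THIS_MODULE, "demoS");
    for (i = 0; i < DEMO_NR_PORTS; i++)
        device_create(demo_class, NULL, demo_base + i, NULL,
                      "demoS%d", i);        /* creates /dev/demoS0..7 */
    return 0;
}
module_init(demo_init);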
Back in sw_uart.c, the __init function continues: platform_device_register registers a platform device only for the UARTs that are set to 1 in the sysconfig file.
Then comes platform_driver_register; let's look at its probe function:
static int __devinit sw_uart_probe(struct platform_device *pdev)
{
    u32 id = pdev->id;
    struct uart_port *port;
    struct sw_uart_port *sw_uport;
    struct clk *apbclk;
    int ret = -1;

    if (unlikely(pdev->id < 0 || pdev->id >= SW_UART_NR))
        return -ENXIO;

    port = &sw_uart_port[id].port;
    port->dev = &pdev->dev;
    sw_uport = UART_TO_SPORT(port);
    sw_uport->id = id;
    sw_uport->ier = 0;
    sw_uport->lcr = 0;
    sw_uport->mcr = 0;
    sw_uport->fcr = 0;
    sw_uport->dll = 0;
    sw_uport->dlh = 0;

    /* request system resource and init them */
    ret = sw_uart_request_resource(sw_uport);
    if (unlikely(ret)) {
        SERIAL_MSG("uart%d error to get resource\n", id);
        return -ENXIO;
    }

    apbclk = clk_get(&pdev->dev, CLK_SYS_APB1);
    if (IS_ERR(apbclk)) {
        SERIAL_MSG("uart%d error to get source clock\n", id);
        return -ENXIO;
    }
    ret = clk_set_parent(sw_uport->mclk, apbclk);
    if (ret) {
        SERIAL_MSG("uart%d set mclk parent error\n", id);
        clk_put(apbclk);
        return -ENXIO;
    }
    port->uartclk = clk_get_rate(apbclk);
    clk_put(apbclk);

    port->type = PORT_SW;
    port->flags = UPF_BOOT_AUTOCONF;
    port->mapbase = sw_uport->pdata->base;
    port->irq = sw_uport->pdata->irq;
    platform_set_drvdata(pdev, port);
#ifdef CONFIG_PROC_FS
    sw_uart_procfs_attach(sw_uport);
#endif
    SERIAL_DBG("add uart%d port, port_type %d, uartclk %d\n",
            id, port->type, port->uartclk);

    return uart_add_one_port(&sw_uart_driver, port);
}
sw_uart_request_resource requests and configures the GPIOs. Next, uart_add_one_port:
int uart_add_one_port(struct uart_driver *drv, struct uart_port *uport)
{
    struct uart_state *state;
    struct tty_port *port;
    int ret = 0;
    struct device *tty_dev;

    BUG_ON(in_interrupt());

    if (uport->line >= drv->nr)
        return -EINVAL;

    state = drv->state + uport->line;  /* state was allocated and initialised in uart_register_driver */
    port = &state->port;

    mutex_lock(&port_mutex);
    mutex_lock(&port->mutex);
    if (state->uart_port) {
        ret = -EINVAL;
        goto out;
    }

    state->uart_port = uport;
    state->pm_state = -1;

    uport->cons = drv->cons;
    uport->state = state;

    /*
     * If this port is a console, then the spinlock is already
     * initialised.
     */
    if (!(uart_console(uport) && (uport->cons->flags & CON_ENABLED))) {
        spin_lock_init(&uport->lock);
        lockdep_set_class(&uport->lock, &port_lock_key);
    }

    uart_configure_port(drv, state, uport);

    /*
     * Register the port whether it's detected or not.  This allows
     * setserial to be used to alter this port's parameters.
     */
    tty_dev = tty_register_device(drv->tty_driver, uport->line, uport->dev);
    if (likely(!IS_ERR(tty_dev))) {
        device_set_wakeup_capable(tty_dev, 1);
    } else {
        printk(KERN_ERR "Cannot register tty device on line %d\n",
               uport->line);
    }

    /*
     * Ensure UPF_DEAD is not set.
     */
    uport->flags &= ~UPF_DEAD;

out:
    mutex_unlock(&port->mutex);
    mutex_unlock(&port_mutex);

    return ret;
}
As the name suggests, this function adds a port to the uart_driver. We said earlier that the states were not yet bound to any uart_port; state->uart_port = uport makes that binding here. I won't go into the port configuration details; what matters is that the uart_driver can now drive the low level through the port's ops function table. Finally it calls tty_register_device:
struct device *tty_register_device(struct tty_driver *driver, unsigned index,
                   struct device *device)
{
    char name[64];
    dev_t dev = MKDEV(driver->major, driver->minor_start) + index;

    if (index >= driver->num) {
        printk(KERN_ERR "Attempt to register invalid tty line number "
               " (%d).\n", index);
        return ERR_PTR(-EINVAL);
    }

    if (driver->type == TTY_DRIVER_TYPE_PTY)
        pty_line_name(driver, index, name);
    else
        tty_line_name(driver, index, name);

    return device_create(tty_class, device, dev, NULL, name);
}
The tty_line_name function is defined as follows:
static void tty_line_name(struct tty_driver *driver, int index, char *p)
{
    sprintf(p, "%s%d", driver->name, index + driver->name_base); /* driver->name was set to "ttyS" in uart_register_driver */
}
As you can see, these are exactly the names mentioned before: ttyS0 through ttyS7.
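Before following the open() path, it helps to see the shape of the ops table that a low-level driver hands to the serial core through each uart_port. The sketch below is illustrative only (demo_* names, trivial bodies) and is not the actual sw_uart_ops; it simply lists the standard struct uart_ops callbacks that serial_core invokes: startup/shutdown around open/close, start_tx when data is queued, set_termios when the port settings change.

/* Illustrative only: a minimal struct uart_ops as a low-level serial driver
 * would provide it. sw_uart.c fills the same kind of table (sw_uart_ops);
 * the demo_* functions here are placeholders with empty bodies. */
#include <linux/serial.h>
#include <linux/serial_core.h>

static unsigned int demo_tx_empty(struct uart_port *port)  { return TIOCSER_TEMT; }
static void demo_set_mctrl(struct uart_port *port, unsigned int mctrl) { }
static unsigned int demo_get_mctrl(struct uart_port *port) { return TIOCM_CTS | TIOCM_CAR | TIOCM_DSR; }
static void demo_stop_tx(struct uart_port *port)           { /* mask the THR-empty irq */ }
static void demo_start_tx(struct uart_port *port)          { /* unmask the THR-empty irq */ }
static void demo_stop_rx(struct uart_port *port)           { /* mask RX irqs */ }
static int  demo_startup(struct uart_port *port)           { return 0; /* request_irq, enable RX irqs */ }
static void demo_shutdown(struct uart_port *port)          { /* mask irqs, free_irq */ }
static void demo_set_termios(struct uart_port *port, struct ktermios *new,
                             struct ktermios *old)          { /* program baud rate and frame format */ }
static const char *demo_type(struct uart_port *port)       { return "demo_uart"; }

static struct uart_ops demo_uart_ops = {
    .tx_empty    = demo_tx_empty,
    .set_mctrl   = demo_set_mctrl,
    .get_mctrl   = demo_get_mctrl,
    .stop_tx     = demo_stop_tx,
    .start_tx    = demo_start_tx,
    .stop_rx     = demo_stop_rx,
    .startup     = demo_startup,
    .shutdown    = demo_shutdown,
    .set_termios = demo_set_termios,
    .type        = demo_type,
};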
So when the upper layer opens the node, the tty_open function from the file_operations above is called. This happens in the tty core layer:
static int tty_open(struct inode *inode, struct file *filp)
{
    struct tty_struct *tty;
    int noctty, retval;
    struct tty_driver *driver = NULL;
    dev_t device = inode->i_rdev;
    unsigned saved_flags = filp->f_flags;

    nonseekable_open(inode, filp);

retry_open:
    retval = tty_alloc_file(filp);
    if (retval)
        return -ENOMEM;

    noctty = filp->f_flags & O_NOCTTY;
    retval = 0;

    mutex_lock(&tty_mutex);
    tty_lock();

    tty = tty_open_current_tty(device, filp);
    ...................................
    if (tty->ops->open)
        retval = tty->ops->open(tty, filp);
    ........................................
        schedule();
        /*
         * Need to reset f_op in case a hangup happened.
         */
        tty_lock();
        if (filp->f_op == &hung_up_tty_fops)
            filp->f_op = &tty_fops;
        tty_unlock();
        goto retry_open;
    .................................
}
As you can see, it first searches the tty_driver list, finds the tty_driver we added earlier, and then calls its ops->open. That ops was installed back in uart_register_driver:
tty_set_operations(normal, &uart_ops);
So we enter the open function in the uart_ops structure. This is where we move from the tty core down into the serial core, one layer lower:
static int uart_open(struct tty_struct *tty, struct file *filp)
{
    struct uart_driver *drv = (struct uart_driver *)tty->driver->driver_state;
    int retval, line = tty->index;
    struct uart_state *state = drv->state + line;
    struct tty_port *port = &state->port;
    ................................
    /*
     * Start up the serial port.
     */
    retval = uart_startup(tty, state, 0);
    .....................................
}
uart_startup:
static int uart_startup(struct tty_struct *tty, struct uart_state *state,
        int init_hw)
{
    struct tty_port *port = &state->port;
    int retval;

    if (port->flags & ASYNC_INITIALIZED)
        return 0;

    /*
     * Set the TTY IO error marker - we will only clear this
     * once we have successfully opened the port.
     */
    set_bit(TTY_IO_ERROR, &tty->flags);

    retval = uart_port_startup(tty, state, init_hw);
    if (!retval) {
        set_bit(ASYNCB_INITIALIZED, &port->flags);
        clear_bit(TTY_IO_ERROR, &tty->flags);
    } else if (retval > 0)
        retval = 0;

    return retval;
}
uart_port_startup:
static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
        int init_hw)
{
    struct uart_port *uport = state->uart_port;
    struct tty_port *port = &state->port;
    int retval = 0;

    retval = uport->ops->startup(uport);
    if (retval == 0) {
        if (uart_console(uport) && uport->cons->cflag) {
            tty->termios->c_cflag = uport->cons->cflag;
            uport->cons->cflag = 0;
        }
        /*
         * Initialise the hardware port settings.
         */
        uart_change_speed(tty, state, NULL);

        if (init_hw) {
            /*
             * Setup the RTS and DTR signals once the
             * port is open and ready to respond.
             */
            if (tty->termios->c_cflag & CBAUD)
                uart_set_mctrl(uport, TIOCM_RTS | TIOCM_DTR);
        }

        if (port->flags & ASYNC_CTS_FLOW) {
            spin_lock_irq(&uport->lock);
            if (!(uport->ops->get_mctrl(uport) & TIOCM_CTS))
                tty->hw_stopped = 1;
            spin_unlock_irq(&uport->lock);
        }
    }
    ...
    return retval;
}
As you can see, it finally calls uport->ops->startup, which takes us from the serial core down to the platform's own serial driver, the lowest layer. That function is defined in sw_uart.c via the sw_uart_port array:
static struct sw_uart_port sw_uart_port[] = {
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 0, },
      .pdata = &sw_uport_pdata[0], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 1, },
      .pdata = &sw_uport_pdata[1], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 2, },
      .pdata = &sw_uport_pdata[2], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 3, },
      .pdata = &sw_uport_pdata[3], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 4, },
      .pdata = &sw_uport_pdata[4], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 5, },
      .pdata = &sw_uport_pdata[5], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 6, },
      .pdata = &sw_uport_pdata[6], },
    { .port = { .iotype = UPIO_MEM, .ops = &sw_uart_ops, .fifosize = 64, .line = 7, },
      .pdata = &sw_uport_pdata[7], },
};
Now look at its .startup function:
static int sw_uart_startup(struct uart_port *port)
{
    struct sw_uart_port *sw_uport = UART_TO_SPORT(port);
    int ret;

    SERIAL_DBG("start up ...\n");
    snprintf(sw_uport->name, sizeof(sw_uport->name),
         "sw_serial%d", port->line);
    ret = request_irq(port->irq, sw_uart_irq, 0, sw_uport->name, port);
    if (unlikely(ret)) {
        SERIAL_MSG("uart%d cannot get irq %d\n", sw_uport->id, port->irq);
        return ret;
    }

    sw_uport->msr_saved_flags = 0;
    /*
     * PTIME mode to select the THRE trigger condition:
     * if PTIME=1(IER[7]), the THRE interrupt will be generated when the
     * the water level of the TX FIFO is lower than the threshold of the
     * TX FIFO. and if PTIME=0, the THRE interrupt will be generated when
     * the TX FIFO is empty.
     * In addition, when PTIME=1, the THRE bit of the LSR register will not
     * be set when the THRE interrupt is generated. You must check the
     * interrupt id of the IIR register to decide whether some data need to
     * be sent to the TX FIFO.
     */
    sw_uport->ier = SW_UART_IER_RLSI | SW_UART_IER_RDI;
#ifdef CONFIG_SW_UART_PTIME_MODE
    sw_uport->ier |= SW_UART_IER_PTIME;
#endif
    ...
    return 0;
}
Ultimately this is about programming the SoC's registers. Note that an interrupt handler is requested here; the read and write paths that follow are closely tied to that interrupt service routine.
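The startup side requests the IRQ, and its mirror image is the driver's shutdown callback, which must release it again. sw_uart_shutdown is not quoted in this article, so the sketch below only shows the generic pairing, with hypothetical demo_* names and comments standing in for the chip-specific register writes.

/* Generic sketch (not the real sw_uart code): the IRQ requested in .startup
 * is released again in .shutdown. demo_* names and the register comments are
 * placeholders for the chip-specific parts. */
#include <linux/interrupt.h>
#include <linux/serial_core.h>

static irqreturn_t demo_uart_irq(int irq, void *dev_id)
{
    struct uart_port *port = dev_id;

    /* read IIR/LSR, drain the RX FIFO, refill the TX FIFO ... */
    (void)port;
    return IRQ_HANDLED;
}

static int demo_uart_startup(struct uart_port *port)
{
    int ret;

    ret = request_irq(port->irq, demo_uart_irq, 0, "demo_uart", port);
    if (ret)
        return ret;

    /* chip specific: enable receive-data and line-status interrupts */
    return 0;
}

static void demo_uart_shutdown(struct uart_port *port)
{
    /* chip specific: mask all UART interrupts first */
    free_irq(port->irq, port);
}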
With the port opened, let's follow an upper-layer write call. It first goes through the tty core's write:
static ssize_t tty_write(struct file *file, const char __user *buf,
                        size_t count, loff_t *ppos)
{
    struct inode *inode = file->f_path.dentry->d_inode;
    struct tty_struct *tty = file_tty(file);
    struct tty_ldisc *ld;
    ssize_t ret;

    if (tty_paranoia_check(tty, inode, "tty_write"))
        return -EIO;
    if (!tty || !tty->ops->write ||
        (test_bit(TTY_IO_ERROR, &tty->flags)))
            return -EIO;
    /* Short term debug to catch buggy drivers */
    if (tty->ops->write_room == NULL)
        printk(KERN_ERR "tty driver %s lacks a write_room method.\n",
            tty->driver->name);
    ld = tty_ldisc_ref_wait(tty);
    if (!ld->ops->write)
        ret = -EIO;
    else
        ret = do_tty_write(ld->ops->write, tty, file, buf, count);
    tty_ldisc_deref(ld);
    return ret;
}
Now look at the do_tty_write function:
static inline ssize_t do_tty_write(
    ssize_t (*write)(struct tty_struct *, struct file *, const unsigned char *, size_t),
    struct tty_struct *tty,
    struct file *file,
    const char __user *buf,
    size_t count)
{
    ............................
    for (;;) {
        size_t size = count;
        if (size > chunk)
            size = chunk;
        ret = -EFAULT;
        if (copy_from_user(tty->write_buf, buf, size))
            break;
        ret = write(tty, file, tty->write_buf, size);
        if (ret <= 0)
            break;
        ...
    }
    ...............................
}
copy_from_user copies the data to be written into the kernel-space write_buf. write here is a function pointer that points at ld->ops->write.
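A small user-space aside before going further down (this is not kernel code): even though do_tty_write loops over chunks internally, a write() on a tty can still return fewer bytes than requested, for example under O_NONBLOCK or when interrupted by a signal, so robust callers loop as well. A minimal helper:

/* User-space sketch: keep calling write() until everything is queued. */
#include <errno.h>
#include <unistd.h>

static ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;      /* interrupted by a signal: retry */
            return -1;         /* real error (including EAGAIN) */
        }
        done += (size_t)n;
    }
    return (ssize_t)done;
}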
ld->ops points at this structure in n_tty.c:
struct tty_ldisc_ops tty_ldisc_N_TTY = {
    .magic           = TTY_LDISC_MAGIC,
    .name            = "n_tty",
    .open            = n_tty_open,
    .close           = n_tty_close,
    .flush_buffer    = n_tty_flush_buffer,
    .chars_in_buffer = n_tty_chars_in_buffer,
    .read            = n_tty_read,
    .write           = n_tty_write,
    .ioctl           = n_tty_ioctl,
    .set_termios     = n_tty_set_termios,
    .poll            = n_tty_poll,
    .receive_buf     = n_tty_receive_buf,
    .write_wakeup    = n_tty_write_wakeup
};
So what actually gets called is the line discipline's write:
static ssize_t n_tty_write(struct tty_struct *tty, struct file *file,
               const unsigned char *buf, size_t nr)
{
    ..........................
        b++; nr--;
    if (tty->ops->flush_chars)
        tty->ops->flush_chars(tty);
    while (nr > 0) {
        c = tty->ops->write(tty, b, nr);
        if (c < 0) {
            retval = c;
            goto break_out;
        }
        ...
    }
    ............................
}
From the line discipline we move down to the tty driver layer's write:
static int uart_write(struct tty_struct *tty,
                    const unsigned char *buf, int count)
{
    ......................
    if (!circ->buf)
        return 0;

    spin_lock_irqsave(&port->lock, flags);
    while (1) {
        c = CIRC_SPACE_TO_END(circ->head, circ->tail, UART_XMIT_SIZE);
        if (count < c)
            c = count;
        if (c <= 0)
            break;
        memcpy(circ->buf + circ->head, buf, c);
        circ->head = (circ->head + c) & (UART_XMIT_SIZE - 1);
        ...
    }
    spin_unlock_irqrestore(&port->lock, flags);

    uart_start(tty);
    return ret;
}
As you can see, the data is memcpy'd into a circular buffer; it ends up in the xmit buffer of the state belonging to this port, which is a ring buffer. Then uart_start is called:
static void uart_start(struct tty_struct *tty)
{
    struct uart_state *state = tty->driver_data;
    struct uart_port *port = state->uart_port;
    unsigned long flags;

    spin_lock_irqsave(&port->lock, flags);
    __uart_start(tty);
    spin_unlock_irqrestore(&port->lock, flags);
}
Transmission is about to start, so the operation is done under the port lock:
static void __uart_start(struct tty_struct *tty)
{
    struct uart_state *state = tty->driver_data;
    struct uart_port *port = state->uart_port;

    if (port->ops->wake_peer)
        port->ops->wake_peer(port);

    if (!uart_circ_empty(&state->xmit) && state->xmit.buf &&
        !tty->stopped && !tty->hw_stopped)
        port->ops->start_tx(port);
}
Finally it calls the driver-layer operation, i.e. this port's transmit hook, to trigger the actual sending:
static void sw_uart_start_tx(struct uart_port *port)
{
    struct sw_uart_port *sw_uport = UART_TO_SPORT(port);

    if (!(sw_uport->ier & SW_UART_IER_THRI)) {
        sw_uport->ier |= SW_UART_IER_THRI;
        SERIAL_DBG("start tx, ier %x\n", sw_uport->ier);
        serial_out(port, sw_uport->ier, SW_UART_IER);
    }
}
serial_out corresponds to the lowest-level register access. Here #define SW_UART_IER (0x04), and SW_UART_IER_THRI enables the transmit-holding-register-empty interrupt; see the Allwinner A20 user manual for the details:
static inline void serial_out(struct uart_port *port, unsigned char value, int offs)
{
    __raw_writeb(value, port->membase + offs);
}
The configuration is written into the register. Once the interrupt is enabled, the interrupt service routine automatically sends out the buffered data; as analysed above, the data was memcpy'd into the circ_buf of this port's state. So let's enter the interrupt handler that was requested during the startup analysis:
static irqreturn_t sw_uart_irq(int irq, void *dev_id)
{
    struct uart_port *port = dev_id;
    struct sw_uart_port *sw_uport = UART_TO_SPORT(port);
    unsigned int iir = 0, lsr = 0;
    unsigned long flags;

    spin_lock_irqsave(&port->lock, flags);

    iir = serial_in(port, SW_UART_IIR) & SW_UART_IIR_IID_MASK;
    lsr = serial_in(port, SW_UART_LSR);
    SERIAL_DBG("irq: iir %x lsr %x\n", iir, lsr);
    if (iir == SW_UART_IIR_IID_BUSBSY) {
        /* handle busy */
        SERIAL_MSG("uart%d busy...\n", sw_uport->id);
        serial_in(port, SW_UART_USR);
#ifdef CONFIG_SW_UART_FORCE_LCR
        sw_uart_force_lcr(sw_uport, 10);
#endif
        serial_out(port, sw_uport->lcr, SW_UART_LCR);
    } else {
        if (lsr & (SW_UART_LSR_DR | SW_UART_LSR_BI))
            lsr = sw_uart_handle_rx(sw_uport, lsr);
        sw_uart_modem_status(sw_uport);
#ifdef CONFIG_SW_UART_PTIME_MODE
        if (iir == SW_UART_IIR_IID_THREMP)
#else
        if (lsr & SW_UART_LSR_THRE)
#endif
            sw_uart_handle_tx(sw_uport);
    }

    spin_unlock_irqrestore(&port->lock, flags);

    return IRQ_HANDLED;
}
We are interested in transmission, so into sw_uart_handle_tx:
static void sw_uart_handle_tx(struct sw_uart_port *sw_uport)
{
    struct circ_buf *xmit = &sw_uport->port.state->xmit;
    int count;

    if (sw_uport->port.x_char) {
        serial_out(&sw_uport->port, sw_uport->port.x_char, SW_UART_THR);
        sw_uport->port.icount.tx++;
        sw_uport->port.x_char = 0;
#ifdef CONFIG_SW_UART_DUMP_DATA
        sw_uport->dump_buff[sw_uport->dump_len++] = sw_uport->port.x_char;
        SERIAL_DUMP(sw_uport, "Tx");
#endif
        return;
    }
    if (uart_circ_empty(xmit) || uart_tx_stopped(&sw_uport->port)) {
        sw_uart_stop_tx(&sw_uport->port);
        return;
    }

    count = sw_uport->port.fifosize / 2;
    do {
#ifdef CONFIG_SW_UART_DUMP_DATA
        sw_uport->dump_buff[sw_uport->dump_len++] = xmit->buf[xmit->tail];
#endif
        serial_out(&sw_uport->port, xmit->buf[xmit->tail], SW_UART_THR);
        xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
        sw_uport->port.icount.tx++;
        if (uart_circ_empty(xmit))
            break;
    } while (--count > 0);
    SERIAL_DUMP(sw_uport, "Tx");

    if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) {
        spin_unlock(&sw_uport->port.lock);
        uart_write_wakeup(&sw_uport->port);
        spin_lock(&sw_uport->port.lock);
    }
    ..........................
}
And there it is: serial_out does pull the data out of the circ_buf and sends it inside the do {} while loop:
static inline void serial_out(struct uart_port *port, unsigned char value, int offs)
{
    __raw_writeb(value, port->membase + offs);
}
That writes the outgoing bytes into the corresponding register, and the hardware finishes the transmission on its own.
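The xmit buffer that uart_write fills and sw_uart_handle_tx drains is an ordinary power-of-two ring buffer: the writer advances head, the TX interrupt advances tail, and both wrap with (x + n) & (UART_XMIT_SIZE - 1). Here is a standalone toy model of that arithmetic; the sizes and names are illustrative, and the CNT/SPACE macros follow the same formulas as the kernel's CIRC_CNT/CIRC_SPACE.

/* Standalone toy model of the xmit ring-buffer arithmetic: head is advanced
 * by the writer (uart_write), tail by the TX interrupt (sw_uart_handle_tx);
 * both wrap with a power-of-two mask. */
#include <stdio.h>

#define XMIT_SIZE 16            /* must be a power of two */

struct ring {
    char buf[XMIT_SIZE];
    unsigned head;              /* next free slot (producer)    */
    unsigned tail;              /* next byte to send (consumer) */
};

/* bytes queued and free space, same formulas as CIRC_CNT/CIRC_SPACE */
#define RING_CNT(r)   (((r)->head - (r)->tail) & (XMIT_SIZE - 1))
#define RING_SPACE(r) (((r)->tail - (r)->head - 1) & (XMIT_SIZE - 1))

static void ring_put(struct ring *r, const char *s)   /* like uart_write */
{
    while (*s && RING_SPACE(r)) {
        r->buf[r->head] = *s++;
        r->head = (r->head + 1) & (XMIT_SIZE - 1);
    }
}

static void ring_drain(struct ring *r)                /* like the TX irq */
{
    while (RING_CNT(r)) {
        putchar(r->buf[r->tail]);   /* stands in for serial_out(..., THR) */
        r->tail = (r->tail + 1) & (XMIT_SIZE - 1);
    }
    putchar('\n');
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0 };

    ring_put(&r, "hello uart");
    ring_drain(&r);
    return 0;
}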
The upper-layer read path is similar to write, except that read takes data out of another ring buffer: incoming data raises an interrupt, and the interrupt service routine receives the bytes and stores them into that buffer, whereas on the write side we put the data into the buffer ourselves. So let's first see how the interrupt handler receives input:
static unsigned int sw_uart_handle_rx(struct sw_uart_port *sw_uport, unsigned int lsr)
{
    struct tty_struct *tty = sw_uport->port.state->port.tty;
    unsigned char ch = 0;
    int max_count = 256;

    do {
        if (likely(lsr & SW_UART_LSR_DR)) {
            ch = serial_in(&sw_uport->port, SW_UART_RBR);
        ..........................
        if (uart_handle_sysrq_char(&sw_uport->port, ch))
            goto ignore_char;
        uart_insert_char(&sw_uport->port, lsr, SW_UART_LSR_OE, ch, flag);
    ............................
}
The character read from the register is placed in ch, and uart_insert_char then handles it, which essentially pushes the byte up to the tty layer:
void uart_insert_char(struct uart_port *port, unsigned int status,
         unsigned int overrun, unsigned int ch, unsigned int flag)
{
    struct tty_struct *tty = port->state->port.tty;

    if ((status & port->ignore_status_mask & ~overrun) == 0)
        tty_insert_flip_char(tty, ch, flag);

    /*
     * Overrun is special.  Since it's reported immediately,
     * it doesn't affect the current character.
     */
    if (status & ~port->ignore_status_mask & overrun)
        tty_insert_flip_char(tty, 0, TTY_OVERRUN);
}
static inline int tty_insert_flip_char(struct tty_struct *tty,
                    unsigned char ch, char flag)
{
    struct tty_buffer *tb = tty->buf.tail;
    if (tb && tb->used < tb->size) {
        tb->flag_buf_ptr[tb->used] = flag;
        tb->char_buf_ptr[tb->used++] = ch;
        return 1;
    }
    return tty_insert_flip_string_flags(tty, &ch, &flag, 1);
}
When the current tty_buffer has no room left, tty_insert_flip_string_flags is called; it finds the next tty_buffer and stores the data into that buffer's char_buf_ptr.
How does the data in char_buf_ptr reach the line discipline's read_buf? That was prepared at tty open time, via tty_init_dev -> initialize_tty_struct -> tty_buffer_init:
void tty_buffer_init(struct tty_struct *tty)
{
    spin_lock_init(&tty->buf.lock);
    tty->buf.head = NULL;
    tty->buf.tail = NULL;
    tty->buf.free = NULL;
    tty->buf.memory_used = 0;
    INIT_WORK(&tty->buf.work, flush_to_ldisc);
}
As you can see, a work item is initialised here. The work is scheduled once the reception above has finished, when sw_uart_handle_rx goes on to call tty_flip_buffer_push:
void tty_flip_buffer_push(struct tty_struct *tty)
{
    unsigned long flags;

    spin_lock_irqsave(&tty->buf.lock, flags);
    if (tty->buf.tail != NULL)
        tty->buf.tail->commit = tty->buf.tail->used;
    spin_unlock_irqrestore(&tty->buf.lock, flags);

    if (tty->low_latency)
        flush_to_ldisc(&tty->buf.work);
    else
        schedule_work(&tty->buf.work);
}
EXPORT_SYMBOL(tty_flip_buffer_push);
So there are two ways of pushing the data up to the line discipline, and they amount to much the same thing; either way the data ends up in the line discipline. Let's look at the work function:
static void flush_to_ldisc(struct work_struct *work)
{
    ..........................
            count = tty->receive_room;
            char_buf = head->char_buf_ptr + head->read;
            flag_buf = head->flag_buf_ptr + head->read;
            head->read += count;
            spin_unlock_irqrestore(&tty->buf.lock, flags);
            disc->ops->receive_buf(tty, char_buf,
                            flag_buf, count);
    ............................
}
The line discipline's receive_buf:
static void n_tty_receive_buf(struct tty_struct *tty, const unsigned char *cp,
                  char *fp, int count)
{
    const unsigned char *p;
    char *f, flags = TTY_NORMAL;

    if (!tty->read_buf)
        return;
    ................................
        memcpy(tty->read_buf + tty->read_head, cp, i);
        tty->read_head = (tty->read_head + i) & (N_TTY_BUF_SIZE-1);
        tty->read_cnt += i;
    ...........................
    if (tty->ops->flush_chars)
        tty->ops->flush_chars(tty);

    n_tty_set_room(tty);
}
As you can see, quite plainly, memcpy copies the data into read_buf.
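One detail worth isolating: tty_flip_buffer_push hands the heavier line-discipline processing off to a work item, so that it runs in process context instead of inside the interrupt handler. The sketch below shows that INIT_WORK/schedule_work pattern on its own, with demo_* placeholder names rather than the real tty code.

/* Sketch of the deferred-work pattern behind tty_flip_buffer_push(): the
 * interrupt handler only queues bytes and schedules work; the work function
 * runs later in process context. All demo_* names are placeholders. */
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct demo_rx {
    struct work_struct work;
    /* ... received bytes would be buffered here ... */
};

static struct demo_rx demo_rx_state;

static void demo_rx_work(struct work_struct *work)
{
    struct demo_rx *rx = container_of(work, struct demo_rx, work);

    /* push rx's buffered bytes up to the next layer (cf. flush_to_ldisc) */
    pr_info("demo: draining rx buffer %p\n", rx);
}

static irqreturn_t demo_rx_irq(int irq, void *dev_id)
{
    /* read characters from the FIFO into demo_rx_state's buffer ... */
    schedule_work(&demo_rx_state.work);   /* defer the heavy part */
    return IRQ_HANDLED;
}

static void demo_rx_init(void)            /* called once at setup time */
{
    INIT_WORK(&demo_rx_state.work, demo_rx_work);
}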
Now let's go back to the top and see how the upper-layer read pulls the data out. The flow is again tty core -> line discipline:
static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
             unsigned char __user *buf, size_t nr)
{
    unsigned char __user *b = buf;
    .......................................
            c = tty->read_buf[tty->read_tail];
    ...............................
            uncopied = copy_from_read_buf(tty, &b, &nr);
            uncopied += copy_from_read_buf(tty, &b, &nr);
    ..............................
}
static int copy_from_read_buf(struct tty_struct *tty,
                      unsigned char __user **b,
                      size_t *nr)
{
    int retval;
    size_t n;
    unsigned long flags;

    retval = 0;
    spin_lock_irqsave(&tty->read_lock, flags);
    n = min(tty->read_cnt, N_TTY_BUF_SIZE - tty->read_tail);
    n = min(*nr, n);
    spin_unlock_irqrestore(&tty->read_lock, flags);
    if (n) {
        retval = copy_to_user(*b, &tty->read_buf[tty->read_tail], n);
        n -= retval;
        tty_audit_add_data(tty, &tty->read_buf[tty->read_tail], n);
        spin_lock_irqsave(&tty->read_lock, flags);
        tty->read_tail = (tty->read_tail + n) & (N_TTY_BUF_SIZE-1);
        tty->read_cnt -= n;
        /* Turn single EOF into zero-length read */
        if (L_EXTPROC(tty) && tty->icanon && n == 1) {
            if (!tty->read_cnt && (*b)[n-1] == EOF_CHAR(tty))
                n--;
        }
        spin_unlock_irqrestore(&tty->read_lock, flags);
        *b += n;
        *nr -= n;
    }
    return retval;
}
There is copy_to_user: it copies the read_buf data out to user space.
At this point the serial read and write flow should be completely clear.
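To close the loop, here is a small user-space program that exercises exactly the path walked through above: open() ends up in tty_open -> uart_open -> sw_uart_startup, write() lands in uart_write and the TX interrupt, and read() drains the line discipline's read_buf. The node /dev/ttyS1 and the 115200 baud setting are only examples; use whichever port you enabled in the sysconfig script.

/* User-space example: open a UART node, set raw 115200 8N1, write a few
 * bytes and read whatever comes back. /dev/ttyS1 is only an example. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios tio;
    char rx[64];
    ssize_t n;
    int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= CLOCAL | CREAD;    /* ignore modem lines, enable receiver */
    tio.c_cc[VMIN]  = 0;              /* read() returns after ...            */
    tio.c_cc[VTIME] = 10;             /* ... at most 1 second without data   */
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "hello\n", 6);          /* -> tty_write -> n_tty_write -> uart_write */

    n = read(fd, rx, sizeof(rx) - 1); /* <- copy_from_read_buf / copy_to_user */
    if (n > 0) {
        rx[n] = '\0';
        printf("got %zd bytes: %s\n", n, rx);
    }
    close(fd);
    return 0;
}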